Classificação de Aparelhos


Análise exploratória da base REDD, aplicando Classificação de Séries Temporais baseada em Aprendizado Supervisionado.

A base REDD contempla dados de consumo energético de 6 casas distintas. Primeiro, após a análise exploratória, será treinado e testado um modelo a partir dos dados da residência 1 e avaliado o quão bem esse modelo generaliza para os padrões não-observados da residência 2.

Imporando pacotes e lendo dados das residências 1 e 2 (com rotulação)


In [11]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import display
import datetime
import time, os
import math
import warnings
warnings.filterwarnings("ignore")
import glob
import nilmtk

In [2]:
PATH_DATASET = './datasets/REDD/'
def read_label():
    label = {}
    for i in range(1, 7):
        hi = os.path.join(PATH_DATASET, 'low_freq/house_{}/labels.dat').format(i)
        label[i] = {}
        with open(hi) as f:
            for line in f:
                splitted_line = line.split(' ')
                label[i][int(splitted_line[0])] = splitted_line[1].strip() + '_' + splitted_line[0]
    return label
labels = read_label()
for i in range(1,3):
    print('Residência {}: '.format(i), labels[i], '\n')


Residência 1:  {1: 'mains_1', 2: 'mains_2', 3: 'oven_3', 4: 'oven_4', 5: 'refrigerator_5', 6: 'dishwaser_6', 7: 'kitchen_outlets_7', 8: 'kitchen_outlets_8', 9: 'lighting_9', 10: 'washer_dryer_10', 11: 'microwave_11', 12: 'bathroom_gfi_12', 13: 'electric_heat_13', 14: 'stove_14', 15: 'kitchen_outlets_15', 16: 'kitchen_outlets_16', 17: 'lighting_17', 18: 'lighting_18', 19: 'washer_dryer_19', 20: 'washer_dryer_20'} 

Residência 2:  {1: 'mains_1', 2: 'mains_2', 3: 'kitchen_outlets_3', 4: 'lighting_4', 5: 'stove_5', 6: 'microwave_6', 7: 'washer_dryer_7', 8: 'kitchen_outlets_8', 9: 'refrigerator_9', 10: 'dishwaser_10', 11: 'disposal_11'} 


In [3]:
def read_merge_data(house):
    path = os.path.join(PATH_DATASET, 'low_freq/house_{}/').format(house)
    file = path + 'channel_1.dat'
    df = pd.read_table(file, sep = ' ', names = ['unix_time', labels[house][1]], 
                                       dtype = {'unix_time': 'int64', labels[house][1]:'float64'}) 
    
    num_apps = len(glob.glob(path + 'channel*'))
    for i in range(2, num_apps + 1):
        file = path + 'channel_{}.dat'.format(i)
        data = pd.read_table(file, sep = ' ', names = ['unix_time', labels[house][i]], 
                                       dtype = {'unix_time': 'int64', labels[house][i]:'float64'})
        df = pd.merge(df, data, how = 'inner', on = 'unix_time')
    df['timestamp'] = df['unix_time'].astype("datetime64[s]")
    df = df.set_index(df['timestamp'].values)
    df.drop(['unix_time','timestamp'], axis=1, inplace=True)
    return df
df = {}
for i in range(1,3):
    df[i] = read_merge_data(i)

In [4]:
for i in range(1,3):
    print('Shape dos dados da Residência {}: '.format(i), df[i].shape)
    display(df[i].tail(3))


Shape dos dados da Residência 1:  (406748, 20)
mains_1 mains_2 oven_3 oven_4 refrigerator_5 dishwaser_6 kitchen_outlets_7 kitchen_outlets_8 lighting_9 washer_dryer_10 microwave_11 bathroom_gfi_12 electric_heat_13 stove_14 kitchen_outlets_15 kitchen_outlets_16 lighting_17 lighting_18 washer_dryer_19 washer_dryer_20
2011-05-24 19:56:27 235.46 38.61 0.0 0.0 190.0 0.0 24.0 20.0 2.0 0.0 4.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
2011-05-24 19:56:30 235.98 38.77 0.0 0.0 189.0 0.0 24.0 20.0 2.0 0.0 4.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
2011-05-24 19:56:34 235.29 38.83 0.0 0.0 186.0 0.0 26.0 20.0 2.0 0.0 4.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
Shape dos dados da Residência 2:  (316840, 11)
mains_1 mains_2 kitchen_outlets_3 lighting_4 stove_5 microwave_6 washer_dryer_7 kitchen_outlets_8 refrigerator_9 dishwaser_10 disposal_11
2011-05-22 23:59:01 10.84 252.61 0.0 9.0 0.0 5.0 0.0 2.0 158.0 0.0 0.0
2011-05-22 23:59:04 10.88 253.02 0.0 9.0 0.0 4.0 0.0 2.0 160.0 0.0 0.0
2011-05-22 23:59:08 10.84 252.77 0.0 9.0 0.0 4.0 0.0 2.0 157.0 0.0 0.0

In [5]:
dates = {}
for i in range(1,3):
    dates[i] = [str(time)[:10] for time in df[i].index.values]
    dates[i] = sorted(list(set(dates[i])))
    print('Os dados da Residência {0} contém medições de {1} dia(s) (de {2} a {3}).'.format(i,len(dates[i]),dates[i][0], dates[i][-1]))
    print(dates[i], '\n')


Os dados da Residência 1 contém medições de 23 dia(s) (de 2011-04-18 a 2011-05-24).
['2011-04-18', '2011-04-19', '2011-04-20', '2011-04-21', '2011-04-22', '2011-04-23', '2011-04-24', '2011-04-25', '2011-04-26', '2011-04-27', '2011-04-28', '2011-04-30', '2011-05-01', '2011-05-02', '2011-05-03', '2011-05-06', '2011-05-07', '2011-05-11', '2011-05-12', '2011-05-13', '2011-05-22', '2011-05-23', '2011-05-24'] 

Os dados da Residência 2 contém medições de 16 dia(s) (de 2011-04-18 a 2011-05-22).
['2011-04-18', '2011-04-19', '2011-04-20', '2011-04-21', '2011-04-22', '2011-04-23', '2011-04-24', '2011-04-25', '2011-04-26', '2011-04-27', '2011-04-28', '2011-04-29', '2011-04-30', '2011-05-01', '2011-05-02', '2011-05-22'] 


In [6]:
# Plotar 2 primeiros dias de dados das residências 1 e 2
def plot_df(df, title):
    apps = df.columns.values
    num_apps = len(apps) 
    fig, axes = plt.subplots((num_apps+1)//2,2, figsize=(24, num_apps*2) )
    for i, key in enumerate(apps):
        axes.flat[i].plot(df[key], alpha = 0.6)
        axes.flat[i].set_title(key, fontsize = '15')
    plt.suptitle(title, fontsize = '30')
    fig.tight_layout()
    fig.subplots_adjust(top=0.95)

for i in range(1,3):
    plot_df(df[i].ix[:dates[i][1]], 'Registros dos 2 primeiros dias da Residência {}'.format(i))



In [7]:
# Plot total energy sonsumption of each appliance from two houses
fig, axes = plt.subplots(1,2,figsize=(24, 10))
plt.suptitle('Energia total consumida por cada aparelho', fontsize = 30)
cons1 = df[1][df[1].columns.values[2:]].sum().sort_values(ascending=False)
app1 = cons1.index
y_pos1 = np.arange(len(app1))
axes[0].bar(y_pos1, cons1.values,  alpha=0.6) 
plt.sca(axes[0])
plt.xticks(y_pos1, app1, rotation = 90, fontsize=16)
plt.title('Residência 1')

cons2 = df[2][df[2].columns.values[2:]].sum().sort_values(ascending=False)
app2 = cons2.index
y_pos2 = np.arange(len(app2))
axes[1].bar(y_pos2, cons2.values, alpha=0.6)
plt.sca(axes[1])
plt.xticks(y_pos2, app2, rotation = 90, fontsize=16)
plt.title('Residência 2')


Out[7]:
Text(0.5, 1.0, 'Residência 2')

Modelo #1: Árvore de Decisão (Regression Tree)


Treino e Teste na Residência 1


In [8]:
# Split de treino, teste e validação
df1_train = df[1].ix[:dates[1][10]]
df1_val = df[1].ix[dates[1][11]:dates[1][16]]
df1_test = df[1].ix[dates[1][17]:]
print('df_train.shape: ', df1_train.shape)
print('df_val.shape: ', df1_val.shape)
print('df_test.shape: ', df1_test.shape)


df_train.shape:  (214816, 20)
df_val.shape:  (104875, 20)
df_test.shape:  (87057, 20)

In [9]:
# Exemplo de base, com X = ('mains_1','mains_2') e Y = (refrigerator_5)
# A previsão (desagregação) da corrente é na vertical
df_sample = df1_val[['mains_1','mains_2','refrigerator_5']]
df_sample.head(10)


Out[9]:
mains_1 mains_2 refrigerator_5
2011-04-30 03:10:38 191.78 121.60 6.0
2011-04-30 03:10:57 191.78 121.58 7.0
2011-04-30 03:11:00 194.35 120.95 6.0
2011-04-30 03:11:04 193.80 121.29 6.0
2011-04-30 03:11:07 191.54 121.56 7.0
2011-04-30 03:11:10 190.55 120.95 7.0
2011-04-30 03:11:14 190.96 121.12 6.0
2011-04-30 03:11:17 191.45 121.88 7.0
2011-04-30 03:11:21 191.98 121.53 7.0
2011-04-30 03:11:24 190.55 121.34 7.0

In [10]:
print('Dias compreendidos na leitura/desagregação:')
set([str(dt).split(' ')[0] for dt in df_sample.index])


Dias compreendidos na leitura/desagregação:
Out[10]:
{'2011-04-30',
 '2011-05-01',
 '2011-05-02',
 '2011-05-03',
 '2011-05-06',
 '2011-05-07'}

In [11]:
# Usando a corrente 1 e 2 (variaveis independetes) para a previsão do refrigerador (variavel dependente)
X_train1 = df1_train[['mains_1','mains_2']].values
y_train1 = df1_train['refrigerator_5'].values
X_val1 = df1_val[['mains_1','mains_2']].values
y_val1 = df1_val['refrigerator_5'].values
X_test1 = df1_test[['mains_1','mains_2']].values
y_test1 = df1_test['refrigerator_5'].values

print(
    X_train1.shape, y_train1.shape, 
    X_val1.shape, y_val1.shape, 
    X_test1.shape, y_test1.shape
)


(214816, 2) (214816,) (104875, 2) (104875,) (87057, 2) (87057,)

In [12]:
# Metrcas de avaliação da regressão
def mse_loss(y_predict, y):
    return np.mean(np.square(y_predict - y)) 
def mae_loss(y_predict, y):
    return np.mean(np.abs(y_predict - y)) 

# Serão usados os dados de validação para ajustar o parâmetros min_samples_split
min_samples_split = np.arange(2, 400, 10)

# Treinando o modelo
from sklearn.tree import DecisionTreeRegressor
def tree_reg(X_train, y_train, X_val, y_val, min_samples_split):
    clfs = []
    losses = []
    start = time.time()
    for split in min_samples_split:
        clf = DecisionTreeRegressor(min_samples_split = split)
        clf.fit(X_train, y_train)
        y_predict_val = clf.predict(X_val)
        clfs.append(clf)
        losses.append( mse_loss(y_predict_val, y_val) )
    print('Tempo de execução (s): ', round(time.time() - start, 0))
    return clfs, losses
tree_clfs_1, tree_losses_1 = tree_reg(X_train1, y_train1, X_val1, y_val1, min_samples_split)


Tempo de execução (s):  32.0

In [13]:
def plot_losses(losses, min_samples_split):
    index = np.arange(len(min_samples_split))
    bar_width = 0.4
    opacity = 0.35

    plt.bar(index, losses, bar_width, alpha=opacity, color='b')
    plt.xlabel('min_samples_split', fontsize=30)
    plt.ylabel('loss', fontsize=30)
    plt.title('Loss (Validação) x min_samples_split', fontsize = '25')
    plt.xticks(index + bar_width/2, min_samples_split, fontsize=20 )
    plt.yticks(fontsize=20 )
    plt.rcParams["figure.figsize"] = [24,15]
    plt.tight_layout()

plot_losses(tree_losses_1, min_samples_split)



In [14]:
# Escolhendo o melhor modelo (minsplit x loss) e prevendo o consumo do refrigerador no conjunto de teste
ind = np.argmin(tree_losses_1)
tree_clf_1 = tree_clfs_1[ind]

y_test_predict_1 = tree_clf_1.predict(X_test1)
mse_tree_1 = mse_loss(y_test_predict_1, y_test1)
mae_tree_1 = mae_loss(y_test_predict_1, y_test1)
print('MSE no Conjunto de Teste:', mse_tree_1)
print('MAE no Conjunto de Teste:', mae_tree_1)


MSE no Conjunto de Teste: 1634.5797666188705
MAE no Conjunto de Teste: 12.686127417077758

In [15]:
# Plotando os cnsumos REAL e o PREVISTO do refrigerador nos 6 dias dos dados de teste
def plot_each_app(df, dates, predict, y_test, title, look_back = 0):
    num_date = len(dates)
    fig, axes = plt.subplots(num_date,1,figsize=(24, num_date*5) )
    plt.suptitle(title, fontsize = '25')
    fig.tight_layout()
    fig.subplots_adjust(top=0.95)
    for i in range(num_date):
        if i == 0: l = 0
        ind = df.ix[dates[i]].index[look_back:]
        axes.flat[i].plot(ind, y_test[l:l+len(ind)], color = 'blue', alpha = 0.6, label = 'REAL')
        axes.flat[i].plot(ind, predict[l:l+len(ind)], color = 'red', alpha = 0.6, label = 'PREVISTO')
        axes.flat[i].legend()
        l = len(ind)

plot_each_app(df1_test, dates[1][17:], y_test_predict_1, y_test1, 'Consumo Real/Previsto do refrigerador nos 6 dias da Residência 1')


Validando o desempenho do modelo treinado na Residência 1 para desagregar a energia do refrigerador da Residência 2


In [29]:
X_2 = df[2][['mains_2','mains_1']].values # Mesmas variáveis independentes, correntes 1 e 2)
y_2 = df[2]['refrigerator_9'].values
print(X_2.shape, y_2.shape)


(316840, 2) (316840,)

In [30]:
y_predict_2 = tree_clf_1.predict(X_2)
mse_tree_2 = mse_loss(y_predict_2, y_2)
mae_tree_2 = mae_loss(y_predict_2, y_2)
print('MSE no Conjunto de Teste:', mse_tree_2)
print('MAE no Conjunto de Teste:', mae_tree_2)


MSE no Conjunto de Teste: 32245.25362228206
MAE no Conjunto de Teste: 64.75419454670589

In [31]:
plot_each_app(df[2], dates[2], y_predict_2, y_2, 'Modelo de Árvore de Decisão aplicado ao Refrigerador: treinado na Res. 1, prevendo na Res. 2')


Avaliando o desempenho do modelo para previsão de consumo de outros aparelhos na residência 1


In [36]:
# Lista de outros aparelhos da res. 1
appliances = list(df[1].columns.values[2:])
appliances.pop(2)
print(appliances)


['oven_3', 'oven_4', 'dishwaser_6', 'kitchen_outlets_7', 'kitchen_outlets_8', 'lighting_9', 'washer_dryer_10', 'microwave_11', 'bathroom_gfi_12', 'electric_heat_13', 'stove_14', 'kitchen_outlets_15', 'kitchen_outlets_16', 'lighting_17', 'lighting_18', 'washer_dryer_19', 'washer_dryer_20']

In [37]:
# Treinando o modelo de Árvore para os outros aparelhos
def tree_reg_mult_apps():
    start = time.time()
    min_samples_split=np.arange(2, 400, 10)
    pred = {}
    for app in appliances:
        list_clfs = []
        losses = []
        y_train = df1_train[app].values
        y_val = df1_val[app].values
        for split in min_samples_split:
            clf = DecisionTreeRegressor(min_samples_split = split)
            clf.fit(X_train1, y_train)
            y_predict_val = clf.predict(X_val1)
            list_clfs.append(clf)
            losses.append( mse_loss(y_predict_val, y_val) )
        ind = np.argmin(losses)
        pred[app] = list_clfs[ind].predict(X_test1)
    print('Tempo de execução (s): ', round(time.time() - start, 0))
    return pred

mul_pred = tree_reg_mult_apps()


Tempo de execução:  326.189656496048

In [41]:
# Calculando erros/perdas (losses) de múltiplos aparelhos
def error_mul_app(mul_pred):
    mse_losses = {}
    mae_losses = {}
    for app in appliances:
        mse_losses[app] = mse_loss(mul_pred[app], df1_test[app].values)
        mae_losses[app] = mae_loss(mul_pred[app], df1_test[app].values)
    return mse_losses, mae_losses
mul_mse_tree, mul_mae_tree = error_mul_app(mul_pred)

In [42]:
for app in appliances:
    m = np.mean(df1_test[app].values)
    print('Consumo Médio de {0}: {1:.2f} - MSE: {2:.2f} - MAE: {3:.2f}'.format(app, m ,mul_mse_tree[app], mul_mae_tree[app]))


Consumo Médio de oven_3: 15.63 - MSE: 18555.36 - MAE: 11.30
Consumo Médio de oven_4: 17.11 - MSE: 7454.75 - MAE: 4.92
Consumo Médio de dishwaser_6: 25.35 - MSE: 831.49 - MAE: 3.38
Consumo Médio de kitchen_outlets_7: 21.25 - MSE: 4.54 - MAE: 1.59
Consumo Médio de kitchen_outlets_8: 27.71 - MSE: 99.51 - MAE: 3.43
Consumo Médio de lighting_9: 28.29 - MSE: 1596.25 - MAE: 24.02
Consumo Médio de washer_dryer_10: 3.07 - MSE: 934.44 - MAE: 2.44
Consumo Médio de microwave_11: 18.92 - MSE: 12442.92 - MAE: 13.00
Consumo Médio de bathroom_gfi_12: 6.73 - MSE: 3471.07 - MAE: 3.44
Consumo Médio de electric_heat_13: 0.11 - MSE: 0.53 - MAE: 0.05
Consumo Médio de stove_14: 0.10 - MSE: 0.23 - MAE: 0.04
Consumo Médio de kitchen_outlets_15: 5.34 - MSE: 832.75 - MAE: 1.63
Consumo Médio de kitchen_outlets_16: 1.93 - MSE: 802.91 - MAE: 0.64
Consumo Médio de lighting_17: 18.97 - MSE: 136.66 - MAE: 3.18
Consumo Médio de lighting_18: 15.68 - MSE: 383.82 - MAE: 13.12
Consumo Médio de washer_dryer_19: 0.00 - MSE: 0.00 - MAE: 0.00
Consumo Médio de washer_dryer_20: 27.54 - MSE: 2087.58 - MAE: 1.79

In [54]:
for app in appliances:
    plot_each_app(df1_test, dates[1][17:], mul_pred[app], df1_test[app].values, 
                  '{} - Real e Previsto nos 6 dias de dados da Residência 1'.format(app))


Modelo #2: Rede Neural Artificial (Full-connected)


Para rápida prototipação, utilizaremos o Keras (c/ backend TensorFlow)


In [20]:
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
from keras.optimizers import Adam
from keras.regularizers import l2
from keras.utils import plot_model

In [21]:
def build_fc_model(layers):
    fc_model = Sequential()
    for i in range(len(layers)-1):
        fc_model.add( Dense(input_dim=layers[i], output_dim= layers[i+1]) )#, W_regularizer=l2(0.1)) )
        fc_model.add( Dropout(0.5) )
        if i < (len(layers) - 2):
            fc_model.add( Activation('relu') )
    fc_model.summary()
    plot_model(fc_model)
    return fc_model
fc_model_1 = build_fc_model([2, 256, 512, 1024, 1])


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_5 (Dense)              (None, 256)               768       
_________________________________________________________________
dropout_5 (Dropout)          (None, 256)               0         
_________________________________________________________________
activation_4 (Activation)    (None, 256)               0         
_________________________________________________________________
dense_6 (Dense)              (None, 512)               131584    
_________________________________________________________________
dropout_6 (Dropout)          (None, 512)               0         
_________________________________________________________________
activation_5 (Activation)    (None, 512)               0         
_________________________________________________________________
dense_7 (Dense)              (None, 1024)              525312    
_________________________________________________________________
dropout_7 (Dropout)          (None, 1024)              0         
_________________________________________________________________
activation_6 (Activation)    (None, 1024)              0         
_________________________________________________________________
dense_8 (Dense)              (None, 1)                 1025      
_________________________________________________________________
dropout_8 (Dropout)          (None, 1)                 0         
=================================================================
Total params: 658,689
Trainable params: 658,689
Non-trainable params: 0
_________________________________________________________________

In [22]:
adam = Adam(lr = 1e-5)
fc_model_1.compile(loss='mean_squared_error', optimizer=adam)
start = time.time()
model_path = "./resources/ann-fc_refrig_h1_2.hdf5"
checkpointer = ModelCheckpoint(filepath="./resources/ann-fc_refrig_h1_2.hdf5", verbose=0, save_best_only=True)
hist_fc_1 = fc_model_1.fit( X_train1, y_train1,
                    batch_size=512, verbose=1, nb_epoch=200,
                    validation_split=0.33, callbacks=[checkpointer])
print('Tempo total de treinamento do modelo (s):', round(time.time() - start, 0))


Train on 143926 samples, validate on 70890 samples
Epoch 1/200
143926/143926 [==============================] - 5s 32us/step - loss: 13200.1076 - val_loss: 9414.4429
Epoch 2/200
143926/143926 [==============================] - 4s 25us/step - loss: 12222.5034 - val_loss: 9148.9905
Epoch 3/200
143926/143926 [==============================] - 3s 24us/step - loss: 12110.9068 - val_loss: 9258.5751
Epoch 4/200
143926/143926 [==============================] - 3s 24us/step - loss: 11626.9671 - val_loss: 9233.5125
Epoch 5/200
143926/143926 [==============================] - 3s 24us/step - loss: 11367.6411 - val_loss: 9264.2796
Epoch 6/200
143926/143926 [==============================] - 3s 24us/step - loss: 11258.3505 - val_loss: 9310.5611
Epoch 7/200
143926/143926 [==============================] - 4s 24us/step - loss: 11071.8485 - val_loss: 9398.7222
Epoch 8/200
143926/143926 [==============================] - 4s 24us/step - loss: 10878.0528 - val_loss: 9418.9764
Epoch 9/200
143926/143926 [==============================] - 4s 25us/step - loss: 10741.2017 - val_loss: 9447.8376
Epoch 10/200
143926/143926 [==============================] - 4s 25us/step - loss: 10467.2157 - val_loss: 9505.2691
Epoch 11/200
143926/143926 [==============================] - 4s 24us/step - loss: 10516.3991 - val_loss: 9458.1933
Epoch 12/200
143926/143926 [==============================] - 3s 24us/step - loss: 10358.5384 - val_loss: 9472.2277
Epoch 13/200
143926/143926 [==============================] - 3s 24us/step - loss: 10212.6927 - val_loss: 9531.0471
Epoch 14/200
143926/143926 [==============================] - 3s 24us/step - loss: 10230.2810 - val_loss: 9606.1874
Epoch 15/200
143926/143926 [==============================] - 4s 24us/step - loss: 10066.7433 - val_loss: 9574.8837
Epoch 16/200
143926/143926 [==============================] - 3s 24us/step - loss: 10020.0900 - val_loss: 9610.7435
Epoch 17/200
143926/143926 [==============================] - 3s 24us/step - loss: 9977.3650 - val_loss: 9621.1006
Epoch 18/200
143926/143926 [==============================] - 4s 24us/step - loss: 9985.6090 - val_loss: 9660.7163
Epoch 19/200
143926/143926 [==============================] - 4s 24us/step - loss: 9816.2884 - val_loss: 9613.6055
Epoch 20/200
143926/143926 [==============================] - 3s 24us/step - loss: 9879.2607 - val_loss: 9661.2805
Epoch 21/200
143926/143926 [==============================] - 4s 24us/step - loss: 9784.4043 - val_loss: 9677.3890
Epoch 22/200
143926/143926 [==============================] - 4s 25us/step - loss: 9767.9886 - val_loss: 9670.9334
Epoch 23/200
143926/143926 [==============================] - 4s 24us/step - loss: 9653.1937 - val_loss: 9682.2678
Epoch 24/200
143926/143926 [==============================] - 3s 24us/step - loss: 9652.2677 - val_loss: 9735.6943
Epoch 25/200
143926/143926 [==============================] - 3s 24us/step - loss: 9653.2288 - val_loss: 9754.4638
Epoch 26/200
143926/143926 [==============================] - 3s 24us/step - loss: 9603.9161 - val_loss: 9736.0527
Epoch 27/200
143926/143926 [==============================] - 3s 24us/step - loss: 9532.8602 - val_loss: 9752.4828
Epoch 28/200
143926/143926 [==============================] - 4s 25us/step - loss: 9610.6418 - val_loss: 9792.1857
Epoch 29/200
143926/143926 [==============================] - 3s 24us/step - loss: 9527.8398 - val_loss: 9818.9683
Epoch 30/200
143926/143926 [==============================] - 3s 24us/step - loss: 9483.8356 - val_loss: 9832.0550
Epoch 31/200
143926/143926 [==============================] - 3s 24us/step - loss: 9482.5732 - val_loss: 9820.9507
Epoch 32/200
143926/143926 [==============================] - 4s 25us/step - loss: 9484.9121 - val_loss: 9859.4783
Epoch 33/200
143926/143926 [==============================] - 3s 24us/step - loss: 9455.6702 - val_loss: 9889.1386
Epoch 34/200
143926/143926 [==============================] - 3s 24us/step - loss: 9483.7291 - val_loss: 9902.3727
Epoch 35/200
143926/143926 [==============================] - 3s 24us/step - loss: 9392.0504 - val_loss: 9842.3534
Epoch 36/200
143926/143926 [==============================] - 3s 24us/step - loss: 9428.8003 - val_loss: 9855.3912
Epoch 37/200
143926/143926 [==============================] - 4s 24us/step - loss: 9374.6799 - val_loss: 9890.9175
Epoch 38/200
143926/143926 [==============================] - 3s 24us/step - loss: 9363.6940 - val_loss: 9895.9560
Epoch 39/200
143926/143926 [==============================] - 3s 24us/step - loss: 9328.6327 - val_loss: 9891.8713
Epoch 40/200
143926/143926 [==============================] - 3s 24us/step - loss: 9346.2779 - val_loss: 9907.4668
Epoch 41/200
143926/143926 [==============================] - 3s 24us/step - loss: 9273.5809 - val_loss: 9849.2371
Epoch 42/200
143926/143926 [==============================] - 3s 24us/step - loss: 9334.5657 - val_loss: 9895.6605
Epoch 43/200
143926/143926 [==============================] - 3s 24us/step - loss: 9312.0723 - val_loss: 9945.0220
Epoch 44/200
143926/143926 [==============================] - 3s 24us/step - loss: 9282.1040 - val_loss: 9920.1003
Epoch 45/200
143926/143926 [==============================] - 3s 24us/step - loss: 9262.3916 - val_loss: 9895.2283
Epoch 46/200
143926/143926 [==============================] - 3s 24us/step - loss: 9294.9821 - val_loss: 9924.2721
Epoch 47/200
143926/143926 [==============================] - 4s 24us/step - loss: 9264.5788 - val_loss: 9924.3746
Epoch 48/200
143926/143926 [==============================] - 4s 24us/step - loss: 9222.4955 - val_loss: 9903.2141
Epoch 49/200
143926/143926 [==============================] - 4s 25us/step - loss: 9228.0453 - val_loss: 9884.0171
Epoch 50/200
143926/143926 [==============================] - 3s 24us/step - loss: 9236.1501 - val_loss: 9900.6317
Epoch 51/200
143926/143926 [==============================] - 3s 24us/step - loss: 9192.6747 - val_loss: 9876.9511
Epoch 52/200
143926/143926 [==============================] - 4s 25us/step - loss: 9220.6492 - val_loss: 9904.2476
Epoch 53/200
143926/143926 [==============================] - 4s 25us/step - loss: 9133.1676 - val_loss: 9881.3201
Epoch 54/200
143926/143926 [==============================] - 4s 25us/step - loss: 9170.1478 - val_loss: 9878.3716
Epoch 55/200
143926/143926 [==============================] - 4s 25us/step - loss: 9148.7507 - val_loss: 9869.6282
Epoch 56/200
143926/143926 [==============================] - 4s 26us/step - loss: 9159.4990 - val_loss: 9867.2132
Epoch 57/200
143926/143926 [==============================] - 4s 25us/step - loss: 9162.7166 - val_loss: 9876.7865
Epoch 58/200
143926/143926 [==============================] - 4s 25us/step - loss: 9177.7516 - val_loss: 9833.7816
Epoch 59/200
143926/143926 [==============================] - 3s 24us/step - loss: 9112.5855 - val_loss: 9783.1086
Epoch 60/200
143926/143926 [==============================] - 3s 24us/step - loss: 9119.5878 - val_loss: 9798.4726
Epoch 61/200
143926/143926 [==============================] - 3s 24us/step - loss: 9087.7558 - val_loss: 9802.0329
Epoch 62/200
143926/143926 [==============================] - 3s 24us/step - loss: 9082.3075 - val_loss: 9813.2741
Epoch 63/200
143926/143926 [==============================] - 3s 24us/step - loss: 9093.1256 - val_loss: 9795.1413
Epoch 64/200
143926/143926 [==============================] - 4s 25us/step - loss: 9066.3673 - val_loss: 9791.7958
Epoch 65/200
143926/143926 [==============================] - 3s 24us/step - loss: 9043.9223 - val_loss: 9795.3620
Epoch 66/200
143926/143926 [==============================] - 3s 24us/step - loss: 9051.9333 - val_loss: 9743.6024
Epoch 67/200
143926/143926 [==============================] - 3s 24us/step - loss: 9060.4057 - val_loss: 9763.7238
Epoch 68/200
143926/143926 [==============================] - 3s 24us/step - loss: 9058.6603 - val_loss: 9762.1611
Epoch 69/200
143926/143926 [==============================] - 4s 25us/step - loss: 9047.1248 - val_loss: 9765.7713
Epoch 70/200
143926/143926 [==============================] - 3s 24us/step - loss: 9027.3295 - val_loss: 9744.9121
Epoch 71/200
143926/143926 [==============================] - 3s 24us/step - loss: 8980.7086 - val_loss: 9724.7138
Epoch 72/200
143926/143926 [==============================] - 3s 24us/step - loss: 9033.5986 - val_loss: 9723.8381
Epoch 73/200
143926/143926 [==============================] - 3s 24us/step - loss: 8985.5494 - val_loss: 9658.4482
Epoch 74/200
143926/143926 [==============================] - 3s 24us/step - loss: 8996.1731 - val_loss: 9658.0525
Epoch 75/200
143926/143926 [==============================] - 4s 25us/step - loss: 8992.9491 - val_loss: 9649.6022
Epoch 76/200
143926/143926 [==============================] - 4s 25us/step - loss: 8999.3838 - val_loss: 9657.2002
Epoch 77/200
143926/143926 [==============================] - 3s 24us/step - loss: 8958.3211 - val_loss: 9600.2929
Epoch 78/200
143926/143926 [==============================] - 3s 24us/step - loss: 8976.1696 - val_loss: 9632.1745
Epoch 79/200
143926/143926 [==============================] - 3s 24us/step - loss: 8956.0972 - val_loss: 9588.3926
Epoch 80/200
143926/143926 [==============================] - 3s 24us/step - loss: 8946.6951 - val_loss: 9554.1621
Epoch 81/200
143926/143926 [==============================] - 3s 24us/step - loss: 8924.2205 - val_loss: 9545.1744
Epoch 82/200
143926/143926 [==============================] - 4s 25us/step - loss: 8881.1222 - val_loss: 9466.1639
Epoch 83/200
143926/143926 [==============================] - 3s 24us/step - loss: 8933.5432 - val_loss: 9515.3574
Epoch 84/200
143926/143926 [==============================] - 3s 24us/step - loss: 8871.4791 - val_loss: 9462.5486
Epoch 85/200
143926/143926 [==============================] - 3s 24us/step - loss: 8880.9764 - val_loss: 9446.1386
Epoch 86/200
143926/143926 [==============================] - 3s 24us/step - loss: 8825.2958 - val_loss: 9434.0035
Epoch 87/200
143926/143926 [==============================] - 3s 24us/step - loss: 8900.9261 - val_loss: 9419.0141
Epoch 88/200
143926/143926 [==============================] - 3s 24us/step - loss: 8861.8757 - val_loss: 9442.5392
Epoch 89/200
143926/143926 [==============================] - 4s 24us/step - loss: 8841.3165 - val_loss: 9421.7793
Epoch 90/200
143926/143926 [==============================] - 3s 24us/step - loss: 8901.4051 - val_loss: 9398.7694
Epoch 91/200
143926/143926 [==============================] - 4s 25us/step - loss: 8790.2374 - val_loss: 9375.9305
Epoch 92/200
143926/143926 [==============================] - 3s 24us/step - loss: 8843.8273 - val_loss: 9399.9492
Epoch 93/200
143926/143926 [==============================] - 3s 24us/step - loss: 8784.0132 - val_loss: 9359.6124
Epoch 94/200
143926/143926 [==============================] - 3s 24us/step - loss: 8817.8896 - val_loss: 9372.2468
Epoch 95/200
143926/143926 [==============================] - 3s 24us/step - loss: 8819.8393 - val_loss: 9328.9538
Epoch 96/200
143926/143926 [==============================] - 3s 24us/step - loss: 8800.0296 - val_loss: 9376.2302
Epoch 97/200
143926/143926 [==============================] - 3s 24us/step - loss: 8826.1626 - val_loss: 9372.5990
Epoch 98/200
143926/143926 [==============================] - 3s 24us/step - loss: 8754.8126 - val_loss: 9265.7316
Epoch 99/200
143926/143926 [==============================] - 3s 24us/step - loss: 8741.3119 - val_loss: 9293.6449
Epoch 100/200
143926/143926 [==============================] - 3s 24us/step - loss: 8754.4630 - val_loss: 9300.7637
Epoch 101/200
143926/143926 [==============================] - 3s 24us/step - loss: 8788.5412 - val_loss: 9285.6871
Epoch 102/200
143926/143926 [==============================] - 3s 24us/step - loss: 8819.0058 - val_loss: 9305.8762
Epoch 103/200
143926/143926 [==============================] - 3s 24us/step - loss: 8754.1102 - val_loss: 9269.7202
Epoch 104/200
143926/143926 [==============================] - 3s 24us/step - loss: 8710.7946 - val_loss: 9242.5185
Epoch 105/200
143926/143926 [==============================] - 3s 24us/step - loss: 8725.6954 - val_loss: 9211.9142
Epoch 106/200
143926/143926 [==============================] - 3s 24us/step - loss: 8678.5214 - val_loss: 9214.7233
Epoch 107/200
143926/143926 [==============================] - 3s 24us/step - loss: 8668.1553 - val_loss: 9140.7360
Epoch 108/200
143926/143926 [==============================] - 3s 24us/step - loss: 8712.7569 - val_loss: 9203.9299
Epoch 109/200
143926/143926 [==============================] - 3s 24us/step - loss: 8733.9707 - val_loss: 9147.7504
Epoch 110/200
143926/143926 [==============================] - 3s 24us/step - loss: 8704.5638 - val_loss: 9158.5302
Epoch 111/200
143926/143926 [==============================] - 3s 24us/step - loss: 8658.1036 - val_loss: 9106.1139
Epoch 112/200
143926/143926 [==============================] - 4s 25us/step - loss: 8697.0710 - val_loss: 9148.5366
Epoch 113/200
143926/143926 [==============================] - 4s 24us/step - loss: 8683.9536 - val_loss: 9120.4525
Epoch 114/200
143926/143926 [==============================] - 3s 24us/step - loss: 8700.0682 - val_loss: 9184.3661
Epoch 115/200
143926/143926 [==============================] - 3s 24us/step - loss: 8665.6807 - val_loss: 9125.4511
Epoch 116/200
143926/143926 [==============================] - 3s 24us/step - loss: 8675.2779 - val_loss: 9158.2888
Epoch 117/200
143926/143926 [==============================] - 3s 24us/step - loss: 8658.5707 - val_loss: 9112.4618
Epoch 118/200
143926/143926 [==============================] - 3s 24us/step - loss: 8650.8664 - val_loss: 9133.7556
Epoch 119/200
143926/143926 [==============================] - 3s 24us/step - loss: 8638.0545 - val_loss: 9114.7106
Epoch 120/200
143926/143926 [==============================] - 3s 24us/step - loss: 8636.5594 - val_loss: 9127.9343
Epoch 121/200
143926/143926 [==============================] - 4s 24us/step - loss: 8660.8494 - val_loss: 9173.0384
Epoch 122/200
143926/143926 [==============================] - 4s 25us/step - loss: 8609.4689 - val_loss: 9091.0868
Epoch 123/200
143926/143926 [==============================] - 4s 24us/step - loss: 8637.6831 - val_loss: 9091.0623
Epoch 124/200
143926/143926 [==============================] - 3s 24us/step - loss: 8671.9534 - val_loss: 9151.6720
Epoch 125/200
143926/143926 [==============================] - 3s 24us/step - loss: 8630.8712 - val_loss: 9107.9089
Epoch 126/200
143926/143926 [==============================] - 3s 24us/step - loss: 8608.8532 - val_loss: 9165.4061
Epoch 127/200
143926/143926 [==============================] - 3s 24us/step - loss: 8599.1036 - val_loss: 9108.1619
Epoch 128/200
143926/143926 [==============================] - 3s 24us/step - loss: 8574.9799 - val_loss: 9073.1303
Epoch 129/200
143926/143926 [==============================] - 4s 25us/step - loss: 8578.7892 - val_loss: 9083.3339
Epoch 130/200
143926/143926 [==============================] - 3s 24us/step - loss: 8580.4799 - val_loss: 9085.3119
Epoch 131/200
143926/143926 [==============================] - 3s 24us/step - loss: 8596.6552 - val_loss: 9118.1554
Epoch 132/200
143926/143926 [==============================] - 3s 24us/step - loss: 8588.5521 - val_loss: 9112.3616
Epoch 133/200
143926/143926 [==============================] - 3s 24us/step - loss: 8596.4770 - val_loss: 9135.8140
Epoch 134/200
143926/143926 [==============================] - 3s 24us/step - loss: 8544.0626 - val_loss: 9107.7138
Epoch 135/200
143926/143926 [==============================] - 3s 24us/step - loss: 8581.9549 - val_loss: 9088.0267
Epoch 136/200
143926/143926 [==============================] - 3s 24us/step - loss: 8561.2557 - val_loss: 9051.0144
Epoch 137/200
143926/143926 [==============================] - 3s 24us/step - loss: 8606.4391 - val_loss: 9081.6377
Epoch 138/200
143926/143926 [==============================] - 3s 24us/step - loss: 8588.0119 - val_loss: 9121.9951
Epoch 139/200
143926/143926 [==============================] - 3s 24us/step - loss: 8585.3422 - val_loss: 9072.1216
Epoch 140/200
143926/143926 [==============================] - 3s 24us/step - loss: 8543.1754 - val_loss: 9068.0645
Epoch 141/200
143926/143926 [==============================] - 3s 24us/step - loss: 8530.1256 - val_loss: 9046.5236
Epoch 142/200
143926/143926 [==============================] - 3s 24us/step - loss: 8545.7378 - val_loss: 9074.4763
Epoch 143/200
143926/143926 [==============================] - 3s 24us/step - loss: 8560.2830 - val_loss: 9069.5130
Epoch 144/200
143926/143926 [==============================] - 3s 24us/step - loss: 8493.2675 - val_loss: 9053.1041
Epoch 145/200
143926/143926 [==============================] - 3s 24us/step - loss: 8497.1119 - val_loss: 9050.6635
Epoch 146/200
143926/143926 [==============================] - 3s 24us/step - loss: 8546.7575 - val_loss: 9090.0294
Epoch 147/200
143926/143926 [==============================] - 3s 24us/step - loss: 8514.6808 - val_loss: 9057.9897
Epoch 148/200
143926/143926 [==============================] - 3s 24us/step - loss: 8584.9762 - val_loss: 9119.7746
Epoch 149/200
143926/143926 [==============================] - 3s 24us/step - loss: 8501.0244 - val_loss: 9105.2255
Epoch 150/200
143926/143926 [==============================] - 3s 24us/step - loss: 8491.7223 - val_loss: 9043.7032
Epoch 151/200
143926/143926 [==============================] - 3s 24us/step - loss: 8564.4880 - val_loss: 9100.7316
Epoch 152/200
143926/143926 [==============================] - 3s 24us/step - loss: 8508.8122 - val_loss: 9056.6645
Epoch 153/200
143926/143926 [==============================] - 3s 24us/step - loss: 8502.1887 - val_loss: 9062.1385
Epoch 154/200
143926/143926 [==============================] - 3s 24us/step - loss: 8496.7150 - val_loss: 9121.0506
Epoch 155/200
143926/143926 [==============================] - 3s 24us/step - loss: 8487.6719 - val_loss: 9063.7602
Epoch 156/200
143926/143926 [==============================] - 3s 24us/step - loss: 8490.5891 - val_loss: 9075.5730
Epoch 157/200
143926/143926 [==============================] - 3s 24us/step - loss: 8461.3772 - val_loss: 9047.7843
Epoch 158/200
143926/143926 [==============================] - 3s 24us/step - loss: 8509.3924 - val_loss: 9091.7694
Epoch 159/200
143926/143926 [==============================] - 3s 24us/step - loss: 8425.1503 - val_loss: 9008.2207
Epoch 160/200
143926/143926 [==============================] - 4s 24us/step - loss: 8487.2328 - val_loss: 9062.2688
Epoch 161/200
143926/143926 [==============================] - 3s 24us/step - loss: 8460.7087 - val_loss: 9044.4996
Epoch 162/200
143926/143926 [==============================] - 3s 24us/step - loss: 8469.2594 - val_loss: 9032.7098
Epoch 163/200
143926/143926 [==============================] - 3s 24us/step - loss: 8482.2895 - val_loss: 9084.8979
Epoch 164/200
143926/143926 [==============================] - 3s 24us/step - loss: 8433.9647 - val_loss: 9044.3014
Epoch 165/200
143926/143926 [==============================] - 3s 24us/step - loss: 8493.3408 - val_loss: 9081.3475
Epoch 166/200
143926/143926 [==============================] - 3s 24us/step - loss: 8469.6457 - val_loss: 9031.0330
Epoch 167/200
143926/143926 [==============================] - 3s 24us/step - loss: 8455.4869 - val_loss: 9081.2707
Epoch 168/200
143926/143926 [==============================] - 3s 24us/step - loss: 8470.6491 - val_loss: 9077.5003
Epoch 169/200
143926/143926 [==============================] - 3s 24us/step - loss: 8429.8011 - val_loss: 9062.3388
Epoch 170/200
143926/143926 [==============================] - 3s 24us/step - loss: 8471.1372 - val_loss: 9160.1323
Epoch 171/200
143926/143926 [==============================] - 3s 24us/step - loss: 8523.8198 - val_loss: 9136.0101
Epoch 172/200
143926/143926 [==============================] - 3s 24us/step - loss: 8479.8402 - val_loss: 9110.2092
Epoch 173/200
143926/143926 [==============================] - 3s 24us/step - loss: 8469.5049 - val_loss: 9098.3627
Epoch 174/200
143926/143926 [==============================] - 3s 24us/step - loss: 8455.2946 - val_loss: 9047.6041
Epoch 175/200
143926/143926 [==============================] - 4s 25us/step - loss: 8464.1472 - val_loss: 9080.5953
Epoch 176/200
143926/143926 [==============================] - 3s 24us/step - loss: 8454.9124 - val_loss: 9074.4317
Epoch 177/200
143926/143926 [==============================] - 3s 24us/step - loss: 8432.7512 - val_loss: 9088.8029
Epoch 178/200
143926/143926 [==============================] - 3s 24us/step - loss: 8417.6899 - val_loss: 9045.7693
Epoch 179/200
143926/143926 [==============================] - 3s 24us/step - loss: 8440.8457 - val_loss: 9092.0669
Epoch 180/200
143926/143926 [==============================] - 4s 24us/step - loss: 8426.3014 - val_loss: 9079.6408
Epoch 181/200
143926/143926 [==============================] - 4s 25us/step - loss: 8405.1799 - val_loss: 9030.5912
Epoch 182/200
143926/143926 [==============================] - 4s 25us/step - loss: 8430.6201 - val_loss: 9082.2601
Epoch 183/200
143926/143926 [==============================] - 3s 24us/step - loss: 8428.9502 - val_loss: 9053.9205
Epoch 184/200
143926/143926 [==============================] - 4s 25us/step - loss: 8367.1107 - val_loss: 9001.6991
Epoch 185/200
143926/143926 [==============================] - 4s 25us/step - loss: 8414.7004 - val_loss: 9067.8541
Epoch 186/200
143926/143926 [==============================] - 4s 26us/step - loss: 8429.7192 - val_loss: 9091.6395
Epoch 187/200
143926/143926 [==============================] - 3s 24us/step - loss: 8434.9814 - val_loss: 9113.7864
Epoch 188/200
143926/143926 [==============================] - 3s 24us/step - loss: 8390.1112 - val_loss: 9043.9799
Epoch 189/200
143926/143926 [==============================] - 3s 24us/step - loss: 8404.5018 - val_loss: 9025.1843
Epoch 190/200
143926/143926 [==============================] - 3s 24us/step - loss: 8431.6646 - val_loss: 9035.1661
Epoch 191/200
143926/143926 [==============================] - 4s 25us/step - loss: 8381.3880 - val_loss: 8985.3523
Epoch 192/200
143926/143926 [==============================] - 4s 24us/step - loss: 8440.7342 - val_loss: 9063.4542
Epoch 193/200
143926/143926 [==============================] - 4s 24us/step - loss: 8455.7199 - val_loss: 9079.2889
Epoch 194/200
143926/143926 [==============================] - 4s 25us/step - loss: 8408.1158 - val_loss: 9072.0795
Epoch 195/200
143926/143926 [==============================] - 4s 25us/step - loss: 8399.0514 - val_loss: 9063.6314
Epoch 196/200
143926/143926 [==============================] - 4s 24us/step - loss: 8414.2339 - val_loss: 9069.2820
Epoch 197/200
143926/143926 [==============================] - 4s 25us/step - loss: 8391.1428 - val_loss: 9105.3412
Epoch 198/200
143926/143926 [==============================] - 4s 24us/step - loss: 8365.1801 - val_loss: 9068.1260
Epoch 199/200
143926/143926 [==============================] - 4s 24us/step - loss: 8387.8899 - val_loss: 9103.9134
Epoch 200/200
143926/143926 [==============================] - 4s 24us/step - loss: 8425.9067 - val_loss: 9084.0499
Tempo total de treinamento do modelo (s): 697.0

In [24]:
fc_model_1 = load_model(model_path)
pred_fc_1 = fc_model_1.predict(X_test1).reshape(-1)
mse_loss_fc_1 = mse_loss(pred_fc_1, y_test1)
mae_loss_fc_1 = mae_loss(pred_fc_1, y_test1)
print('MSE no conjunto de teste: ', mse_loss_fc_1)
print('MAE no conjunto de teste:', mae_loss_fc_1)


MSE no conjunto de teste:  9527.142491655815
MAE no conjunto de teste: 50.9953678686227

In [25]:
train_loss = hist_fc_1.history['loss']
val_loss = hist_fc_1.history['val_loss']
def plot_losses(train_loss, val_loss):
    plt.rcParams["figure.figsize"] = [24,10]
    plt.title('MSE dos conjuntos de treino e teste - Resid. 1')
    plt.plot( range(len(train_loss)), train_loss, color = 'b', alpha = 0.6, label='loss (treino)' )
    plt.plot( range(len( val_loss )), val_loss, color = 'r', alpha = 0.6, label='loss (validação)' )
    plt.xlabel( 'época' )
    plt.ylabel( 'loss' )
    plt.legend()

plot_losses(train_loss, val_loss)



In [26]:
plot_each_app(df1_test, dates[1][17:], pred_fc_1, y_test1, 
              'Rede Neural FC: Real e Previsão nos 6 dias do Conjunto de Teste da Resid. 1', look_back = 50)


Aplicando o Modelo nos dados da Residência 2


In [32]:
y_pred_fc_2 = fc_model_1.predict(X_2).reshape(-1)
mse_fc_2 = mse_loss(y_pred_fc_2, y_2)
mae_fc_2 = mae_loss(y_pred_fc_2, y_2)
print('MSE no conjunto de teste: ', mse_fc_2)
print('MAE no conjunto de teste: ', mae_fc_2)


MSE no conjunto de teste:  12929.89783431528
MAE no conjunto de teste:  74.76916460887222

In [33]:
plot_each_app(df[2], dates[2], y_pred_fc_2, y_2, 'Modelo RNA-FC para o Refrigerador: treinado na Resid. 1, prevendo na Resid. 2')


Utilizando 50 registros de consumos para retreinar o modelo, e prever o consumo de energia de cada aparelho


In [35]:
def process_data(df, dates, x_features, y_features, look_back = 50):
    i = 0
    for date in dates:
        data = df.ix[date]
        len_data = data.shape[0]
        x = np.array([data[x_features].values[i:i+look_back] 
                      for i in range(len_data - look_back) ]).reshape(-1,look_back, 2)
        y = data[y_features].values[look_back:,:]
        if i == 0:
            X = x
            Y = y
        else:
            X = np.append(X, x, axis=0)
            Y = np.append(Y, y, axis=0)
        i += 1
    return X,Y

In [36]:
start = time.time()
X_train, y_train = process_data(df[1], dates[1][:17], ['mains_1','mains_2'], df[1].columns.values[2:])
X_test, y_test = process_data(df[1], dates[1][17:], ['mains_1','mains_2'], df[1].columns.values[2:])
print('Tempo de execução total (s): ', time.time() - start)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)


Tempo de execução total (s):  458.55007314682007
(318841, 50, 2) (318841, 18) (86757, 50, 2) (86757, 18)

Aplicando a RNA-FC retreinada com os últimos 50 consumos medidos para prever o consumo do refrigerador


In [37]:
fc_model = build_fc_model([100, 256, 512, 1024, 1])


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_9 (Dense)              (None, 256)               25856     
_________________________________________________________________
dropout_9 (Dropout)          (None, 256)               0         
_________________________________________________________________
activation_7 (Activation)    (None, 256)               0         
_________________________________________________________________
dense_10 (Dense)             (None, 512)               131584    
_________________________________________________________________
dropout_10 (Dropout)         (None, 512)               0         
_________________________________________________________________
activation_8 (Activation)    (None, 512)               0         
_________________________________________________________________
dense_11 (Dense)             (None, 1024)              525312    
_________________________________________________________________
dropout_11 (Dropout)         (None, 1024)              0         
_________________________________________________________________
activation_9 (Activation)    (None, 1024)              0         
_________________________________________________________________
dense_12 (Dense)             (None, 1)                 1025      
_________________________________________________________________
dropout_12 (Dropout)         (None, 1)                 0         
=================================================================
Total params: 683,777
Trainable params: 683,777
Non-trainable params: 0
_________________________________________________________________

In [38]:
# Let's flatten data to feed into fc model
X_train_fc = X_train.reshape(-1, 100)
y_train_fc = y_train[:,2]
print(X_train_fc.shape, y_train_fc.shape)


(318841, 100) (318841,)

In [39]:
adam = Adam(lr = 1e-5)
fc_model.compile(loss='mean_squared_error', optimizer=adam)
start = time.time()
checkpointer = ModelCheckpoint(filepath=model_path, verbose=0, save_best_only=True)
hist_fc2 = fc_model.fit( X_train_fc, y_train_fc,
                    batch_size=512, verbose=1, nb_epoch= 200,
                    validation_split=0.33, callbacks=[checkpointer])
print('Tempo total de execução (retreino - em segundos): ', time.time() - start)


Train on 213623 samples, validate on 105218 samples
Epoch 1/200
213623/213623 [==============================] - 6s 30us/step - loss: 87789.8510 - val_loss: 10420.6018
Epoch 2/200
213623/213623 [==============================] - 5s 25us/step - loss: 63785.7081 - val_loss: 9308.3837
Epoch 3/200
213623/213623 [==============================] - 5s 25us/step - loss: 53114.8863 - val_loss: 9274.0766
Epoch 4/200
213623/213623 [==============================] - 5s 25us/step - loss: 43710.0313 - val_loss: 8659.9941
Epoch 5/200
213623/213623 [==============================] - 5s 25us/step - loss: 34633.6412 - val_loss: 8558.3181
Epoch 6/200
213623/213623 [==============================] - 5s 25us/step - loss: 30340.4641 - val_loss: 8338.6227
Epoch 7/200
213623/213623 [==============================] - 5s 25us/step - loss: 27316.5811 - val_loss: 8437.7471
Epoch 8/200
213623/213623 [==============================] - 5s 25us/step - loss: 23837.9340 - val_loss: 8468.9686
Epoch 9/200
213623/213623 [==============================] - 5s 25us/step - loss: 21618.9405 - val_loss: 8299.9158
Epoch 10/200
213623/213623 [==============================] - 5s 26us/step - loss: 19213.5962 - val_loss: 8572.3730
Epoch 11/200
213623/213623 [==============================] - 5s 25us/step - loss: 17950.3481 - val_loss: 8607.7185
Epoch 12/200
213623/213623 [==============================] - 5s 25us/step - loss: 16580.0951 - val_loss: 8566.8838
Epoch 13/200
213623/213623 [==============================] - 5s 25us/step - loss: 15589.9526 - val_loss: 8421.5702
Epoch 14/200
213623/213623 [==============================] - 5s 25us/step - loss: 14966.8231 - val_loss: 8481.0729
Epoch 15/200
213623/213623 [==============================] - 6s 26us/step - loss: 13840.0633 - val_loss: 8397.0684
Epoch 16/200
213623/213623 [==============================] - 7s 31us/step - loss: 13473.1459 - val_loss: 8499.1160
Epoch 17/200
213623/213623 [==============================] - 6s 27us/step - loss: 12863.8743 - val_loss: 8514.4467
Epoch 18/200
213623/213623 [==============================] - 6s 26us/step - loss: 12389.1448 - val_loss: 8490.4389
Epoch 19/200
213623/213623 [==============================] - 5s 25us/step - loss: 11905.9673 - val_loss: 8487.3910
Epoch 20/200
213623/213623 [==============================] - 5s 25us/step - loss: 11594.4342 - val_loss: 8502.7344
Epoch 21/200
213623/213623 [==============================] - 5s 26us/step - loss: 11298.5388 - val_loss: 8512.1337
Epoch 22/200
213623/213623 [==============================] - 5s 25us/step - loss: 10888.5745 - val_loss: 8405.6651
Epoch 23/200
213623/213623 [==============================] - 6s 26us/step - loss: 10889.2966 - val_loss: 8452.6433
Epoch 24/200
213623/213623 [==============================] - 5s 26us/step - loss: 10567.3613 - val_loss: 8388.5478
Epoch 25/200
213623/213623 [==============================] - 5s 25us/step - loss: 10457.2869 - val_loss: 8378.6773
Epoch 26/200
213623/213623 [==============================] - 6s 26us/step - loss: 10291.1418 - val_loss: 8398.6289
Epoch 27/200
213623/213623 [==============================] - 5s 25us/step - loss: 10124.4245 - val_loss: 8291.2274
Epoch 28/200
213623/213623 [==============================] - 6s 27us/step - loss: 10059.5400 - val_loss: 8369.5696
Epoch 29/200
213623/213623 [==============================] - 6s 26us/step - loss: 9913.4336 - val_loss: 8281.3313
Epoch 30/200
213623/213623 [==============================] - 5s 25us/step - loss: 9824.2357 - val_loss: 8284.4517
Epoch 31/200
213623/213623 [==============================] - 5s 26us/step - loss: 9734.2953 - val_loss: 8223.5704
Epoch 32/200
213623/213623 [==============================] - 6s 26us/step - loss: 9700.8670 - val_loss: 8103.9190
Epoch 33/200
213623/213623 [==============================] - 6s 26us/step - loss: 9567.1476 - val_loss: 8128.1646
Epoch 34/200
213623/213623 [==============================] - 5s 25us/step - loss: 9485.0350 - val_loss: 8065.4055
Epoch 35/200
213623/213623 [==============================] - 5s 25us/step - loss: 9393.1859 - val_loss: 8002.9649
Epoch 36/200
213623/213623 [==============================] - 5s 26us/step - loss: 9360.8017 - val_loss: 8015.9464
Epoch 37/200
213623/213623 [==============================] - 6s 26us/step - loss: 9263.4531 - val_loss: 7982.3656
Epoch 38/200
213623/213623 [==============================] - 6s 26us/step - loss: 9177.7113 - val_loss: 7805.8957
Epoch 39/200
213623/213623 [==============================] - 5s 25us/step - loss: 9147.0956 - val_loss: 7870.3621
Epoch 40/200
213623/213623 [==============================] - 5s 25us/step - loss: 9175.7346 - val_loss: 7864.0415
Epoch 41/200
213623/213623 [==============================] - 5s 26us/step - loss: 9078.4159 - val_loss: 7826.6185
Epoch 42/200
213623/213623 [==============================] - 5s 25us/step - loss: 9012.0412 - val_loss: 7692.4737
Epoch 43/200
213623/213623 [==============================] - 5s 25us/step - loss: 8965.4440 - val_loss: 7721.5198
Epoch 44/200
213623/213623 [==============================] - 5s 25us/step - loss: 8950.3264 - val_loss: 7754.5545
Epoch 45/200
213623/213623 [==============================] - 5s 25us/step - loss: 8892.1241 - val_loss: 7620.8745
Epoch 46/200
213623/213623 [==============================] - 5s 25us/step - loss: 8826.5015 - val_loss: 7565.5249
Epoch 47/200
213623/213623 [==============================] - 5s 25us/step - loss: 8764.4092 - val_loss: 7387.8554
Epoch 48/200
213623/213623 [==============================] - 5s 25us/step - loss: 8838.9372 - val_loss: 7448.7401
Epoch 49/200
213623/213623 [==============================] - 5s 25us/step - loss: 8741.4351 - val_loss: 7485.5331
Epoch 50/200
213623/213623 [==============================] - 5s 25us/step - loss: 8677.7882 - val_loss: 7435.8703
Epoch 51/200
213623/213623 [==============================] - 5s 25us/step - loss: 8691.7275 - val_loss: 7423.4133
Epoch 52/200
213623/213623 [==============================] - 5s 25us/step - loss: 8689.0436 - val_loss: 7403.5930
Epoch 53/200
213623/213623 [==============================] - 5s 25us/step - loss: 8699.9728 - val_loss: 7510.7780
Epoch 54/200
213623/213623 [==============================] - 5s 25us/step - loss: 8618.0869 - val_loss: 7479.6653
Epoch 55/200
213623/213623 [==============================] - 5s 25us/step - loss: 8616.5486 - val_loss: 7400.5219
Epoch 56/200
213623/213623 [==============================] - 5s 25us/step - loss: 8560.7785 - val_loss: 7333.3128
Epoch 57/200
213623/213623 [==============================] - 5s 25us/step - loss: 8543.6000 - val_loss: 7283.4099
Epoch 58/200
213623/213623 [==============================] - 5s 25us/step - loss: 8506.4160 - val_loss: 7259.8672
Epoch 59/200
213623/213623 [==============================] - 5s 25us/step - loss: 8524.9033 - val_loss: 7387.5427
Epoch 60/200
213623/213623 [==============================] - 5s 25us/step - loss: 8476.1309 - val_loss: 7284.2558
Epoch 61/200
213623/213623 [==============================] - 5s 25us/step - loss: 8458.8338 - val_loss: 7191.2960
Epoch 62/200
213623/213623 [==============================] - 5s 25us/step - loss: 8455.8584 - val_loss: 7312.0473
Epoch 63/200
213623/213623 [==============================] - 5s 25us/step - loss: 8448.3986 - val_loss: 7267.5927
Epoch 64/200
213623/213623 [==============================] - 5s 25us/step - loss: 8436.1950 - val_loss: 7244.1497
Epoch 65/200
213623/213623 [==============================] - 5s 25us/step - loss: 8408.7624 - val_loss: 7337.8433
Epoch 66/200
213623/213623 [==============================] - 5s 25us/step - loss: 8399.1066 - val_loss: 7171.3174
Epoch 67/200
213623/213623 [==============================] - 5s 25us/step - loss: 8403.5755 - val_loss: 7359.1339
Epoch 68/200
213623/213623 [==============================] - 5s 25us/step - loss: 8376.3125 - val_loss: 7234.7637
Epoch 69/200
213623/213623 [==============================] - 5s 25us/step - loss: 8348.6651 - val_loss: 7459.6479
Epoch 70/200
213623/213623 [==============================] - 5s 25us/step - loss: 8327.9323 - val_loss: 7266.7956
Epoch 71/200
213623/213623 [==============================] - 5s 25us/step - loss: 8377.9162 - val_loss: 7331.3090
Epoch 72/200
213623/213623 [==============================] - 5s 25us/step - loss: 8305.6303 - val_loss: 7206.3114
Epoch 73/200
213623/213623 [==============================] - 5s 25us/step - loss: 8282.2779 - val_loss: 7240.5088
Epoch 74/200
213623/213623 [==============================] - 5s 25us/step - loss: 8282.5659 - val_loss: 7301.4564
Epoch 75/200
213623/213623 [==============================] - 5s 25us/step - loss: 8301.2796 - val_loss: 7359.7477
Epoch 76/200
213623/213623 [==============================] - 5s 25us/step - loss: 8279.5736 - val_loss: 7227.9965
Epoch 77/200
213623/213623 [==============================] - 5s 25us/step - loss: 8268.8150 - val_loss: 7257.2276
Epoch 78/200
213623/213623 [==============================] - 5s 25us/step - loss: 8205.4425 - val_loss: 7173.3484
Epoch 79/200
213623/213623 [==============================] - 5s 25us/step - loss: 8255.3310 - val_loss: 7231.2689
Epoch 80/200
213623/213623 [==============================] - 5s 25us/step - loss: 8288.1551 - val_loss: 7280.2827
Epoch 81/200
213623/213623 [==============================] - 5s 25us/step - loss: 8263.6816 - val_loss: 7287.3178
Epoch 82/200
213623/213623 [==============================] - 5s 25us/step - loss: 8201.9258 - val_loss: 7128.1445
Epoch 83/200
213623/213623 [==============================] - 5s 25us/step - loss: 8214.1373 - val_loss: 7215.7626
Epoch 84/200
213623/213623 [==============================] - 5s 25us/step - loss: 8194.4737 - val_loss: 7221.7403
Epoch 85/200
213623/213623 [==============================] - 5s 25us/step - loss: 8181.6956 - val_loss: 7214.4341
Epoch 86/200
213623/213623 [==============================] - 5s 25us/step - loss: 8196.6226 - val_loss: 7451.6029
Epoch 87/200
213623/213623 [==============================] - 5s 25us/step - loss: 8146.9361 - val_loss: 7194.3198
Epoch 88/200
213623/213623 [==============================] - 5s 25us/step - loss: 8171.9742 - val_loss: 7183.0367
Epoch 89/200
213623/213623 [==============================] - 5s 25us/step - loss: 8181.3374 - val_loss: 7310.0508
Epoch 90/200
213623/213623 [==============================] - 5s 25us/step - loss: 8132.3004 - val_loss: 7250.6811
Epoch 91/200
213623/213623 [==============================] - 5s 25us/step - loss: 8139.5195 - val_loss: 7237.8919
Epoch 92/200
213623/213623 [==============================] - 5s 25us/step - loss: 8126.6552 - val_loss: 7214.5982
Epoch 93/200
213623/213623 [==============================] - 5s 25us/step - loss: 8162.6165 - val_loss: 7318.9062
Epoch 94/200
213623/213623 [==============================] - 5s 25us/step - loss: 8114.2057 - val_loss: 7260.7261
Epoch 95/200
213623/213623 [==============================] - 5s 25us/step - loss: 8141.9888 - val_loss: 7333.5415
Epoch 96/200
213623/213623 [==============================] - 5s 25us/step - loss: 8137.2777 - val_loss: 7218.5312
Epoch 97/200
213623/213623 [==============================] - 5s 25us/step - loss: 8128.7324 - val_loss: 7384.6498
Epoch 98/200
213623/213623 [==============================] - 5s 25us/step - loss: 8075.5989 - val_loss: 7352.1138
Epoch 99/200
213623/213623 [==============================] - 5s 25us/step - loss: 8094.1553 - val_loss: 7368.0682
Epoch 100/200
213623/213623 [==============================] - 5s 25us/step - loss: 8117.6030 - val_loss: 7496.0041
Epoch 101/200
213623/213623 [==============================] - 5s 25us/step - loss: 8114.6866 - val_loss: 7258.8254
Epoch 102/200
213623/213623 [==============================] - 5s 25us/step - loss: 8087.7121 - val_loss: 7350.2439
Epoch 103/200
213623/213623 [==============================] - 5s 25us/step - loss: 8106.2167 - val_loss: 7340.7060
Epoch 104/200
213623/213623 [==============================] - 5s 25us/step - loss: 8095.4014 - val_loss: 7353.1614
Epoch 105/200
213623/213623 [==============================] - 5s 25us/step - loss: 8073.8793 - val_loss: 7283.6552
Epoch 106/200
213623/213623 [==============================] - 5s 25us/step - loss: 8084.3964 - val_loss: 7290.3898
Epoch 107/200
213623/213623 [==============================] - 5s 25us/step - loss: 8043.8384 - val_loss: 7312.5959
Epoch 108/200
213623/213623 [==============================] - 5s 25us/step - loss: 8069.1726 - val_loss: 7315.6262
Epoch 109/200
213623/213623 [==============================] - 5s 25us/step - loss: 8059.0935 - val_loss: 7551.0192
Epoch 110/200
213623/213623 [==============================] - 5s 25us/step - loss: 8038.6606 - val_loss: 7326.9088
Epoch 111/200
213623/213623 [==============================] - 5s 25us/step - loss: 8041.2984 - val_loss: 7319.9764
Epoch 112/200
213623/213623 [==============================] - 5s 25us/step - loss: 8091.1596 - val_loss: 7490.1940
Epoch 113/200
213623/213623 [==============================] - 5s 25us/step - loss: 8001.2876 - val_loss: 7411.9570
Epoch 114/200
213623/213623 [==============================] - 5s 25us/step - loss: 8048.1889 - val_loss: 7483.5312
Epoch 115/200
213623/213623 [==============================] - 5s 25us/step - loss: 7994.2933 - val_loss: 7250.6474
Epoch 116/200
213623/213623 [==============================] - 6s 26us/step - loss: 7969.0338 - val_loss: 7358.2130
Epoch 117/200
213623/213623 [==============================] - 5s 25us/step - loss: 8002.2006 - val_loss: 7520.1170
Epoch 118/200
213623/213623 [==============================] - 5s 25us/step - loss: 8054.0042 - val_loss: 7366.2312
Epoch 119/200
213623/213623 [==============================] - 5s 25us/step - loss: 8027.3734 - val_loss: 7466.4517
Epoch 120/200
213623/213623 [==============================] - 5s 25us/step - loss: 7997.7996 - val_loss: 7345.2798
Epoch 121/200
213623/213623 [==============================] - 5s 25us/step - loss: 8021.5766 - val_loss: 7345.5497
Epoch 122/200
213623/213623 [==============================] - 5s 25us/step - loss: 8010.1069 - val_loss: 7396.9727
Epoch 123/200
213623/213623 [==============================] - 5s 25us/step - loss: 8007.5787 - val_loss: 7654.0028
Epoch 124/200
213623/213623 [==============================] - 5s 25us/step - loss: 8012.2598 - val_loss: 7517.3420
Epoch 125/200
213623/213623 [==============================] - 5s 25us/step - loss: 7986.2026 - val_loss: 7324.7072
Epoch 126/200
213623/213623 [==============================] - 5s 25us/step - loss: 7959.7434 - val_loss: 7324.4298
Epoch 127/200
213623/213623 [==============================] - 5s 25us/step - loss: 7978.2035 - val_loss: 7312.7954
Epoch 128/200
213623/213623 [==============================] - 5s 25us/step - loss: 7989.0541 - val_loss: 7531.5174
Epoch 129/200
213623/213623 [==============================] - 5s 25us/step - loss: 7970.6163 - val_loss: 7645.3411
Epoch 130/200
213623/213623 [==============================] - 5s 25us/step - loss: 7933.7314 - val_loss: 7439.3508
Epoch 131/200
213623/213623 [==============================] - 5s 25us/step - loss: 8000.5132 - val_loss: 7434.2134
Epoch 132/200
213623/213623 [==============================] - 5s 25us/step - loss: 7923.4032 - val_loss: 7540.9887
Epoch 133/200
213623/213623 [==============================] - 5s 25us/step - loss: 7989.4880 - val_loss: 7506.4902
Epoch 134/200
213623/213623 [==============================] - 5s 25us/step - loss: 7962.2582 - val_loss: 7524.5654
Epoch 135/200
213623/213623 [==============================] - 5s 25us/step - loss: 7952.1463 - val_loss: 7617.0777
Epoch 136/200
213623/213623 [==============================] - 5s 25us/step - loss: 7955.3291 - val_loss: 7454.2154
Epoch 137/200
213623/213623 [==============================] - 5s 25us/step - loss: 7970.1686 - val_loss: 7546.0788
Epoch 138/200
213623/213623 [==============================] - 5s 25us/step - loss: 7970.6921 - val_loss: 7614.3222
Epoch 139/200
213623/213623 [==============================] - 5s 26us/step - loss: 7945.3992 - val_loss: 7512.0046
Epoch 140/200
213623/213623 [==============================] - 5s 25us/step - loss: 7926.8102 - val_loss: 7500.7896
Epoch 141/200
213623/213623 [==============================] - 5s 25us/step - loss: 7923.3933 - val_loss: 7502.3714
Epoch 142/200
213623/213623 [==============================] - 5s 25us/step - loss: 7931.6194 - val_loss: 7549.2128
Epoch 143/200
213623/213623 [==============================] - 5s 25us/step - loss: 7914.6008 - val_loss: 7446.7852
Epoch 144/200
213623/213623 [==============================] - 5s 25us/step - loss: 7919.4092 - val_loss: 7474.6231
Epoch 145/200
213623/213623 [==============================] - 5s 25us/step - loss: 7931.1947 - val_loss: 7439.5127
Epoch 146/200
213623/213623 [==============================] - 5s 25us/step - loss: 7943.3032 - val_loss: 7652.7570
Epoch 147/200
213623/213623 [==============================] - 5s 25us/step - loss: 7896.0353 - val_loss: 7476.8070
Epoch 148/200
213623/213623 [==============================] - 5s 25us/step - loss: 7830.4857 - val_loss: 7507.3014
Epoch 149/200
213623/213623 [==============================] - 5s 25us/step - loss: 7896.6814 - val_loss: 7718.7147
Epoch 150/200
213623/213623 [==============================] - 5s 25us/step - loss: 7912.3353 - val_loss: 7515.8324
Epoch 151/200
213623/213623 [==============================] - 5s 24us/step - loss: 7905.1874 - val_loss: 7716.7835
Epoch 152/200
213623/213623 [==============================] - 5s 25us/step - loss: 7874.6591 - val_loss: 7637.3875
Epoch 153/200
213623/213623 [==============================] - 5s 25us/step - loss: 7835.8517 - val_loss: 7532.1870
Epoch 154/200
213623/213623 [==============================] - 5s 25us/step - loss: 7829.0895 - val_loss: 7537.1308
Epoch 155/200
213623/213623 [==============================] - 5s 24us/step - loss: 7873.8448 - val_loss: 7475.9656
Epoch 156/200
213623/213623 [==============================] - 5s 24us/step - loss: 7889.6620 - val_loss: 7637.5914
Epoch 157/200
213623/213623 [==============================] - 5s 24us/step - loss: 7916.8732 - val_loss: 7594.2750
Epoch 158/200
213623/213623 [==============================] - 5s 24us/step - loss: 7868.7855 - val_loss: 7708.0746
Epoch 159/200
213623/213623 [==============================] - 5s 24us/step - loss: 7889.6958 - val_loss: 7692.1481
Epoch 160/200
213623/213623 [==============================] - 5s 24us/step - loss: 7803.1588 - val_loss: 7729.6898
Epoch 161/200
213623/213623 [==============================] - 5s 25us/step - loss: 7852.0839 - val_loss: 7511.8771
Epoch 162/200
213623/213623 [==============================] - 5s 25us/step - loss: 7850.2260 - val_loss: 7533.4011
Epoch 163/200
213623/213623 [==============================] - 5s 25us/step - loss: 7852.5244 - val_loss: 7681.5289
Epoch 164/200
213623/213623 [==============================] - 5s 24us/step - loss: 7867.2170 - val_loss: 7700.1442
Epoch 165/200
213623/213623 [==============================] - 5s 24us/step - loss: 7845.9720 - val_loss: 7818.0526
Epoch 166/200
213623/213623 [==============================] - 5s 24us/step - loss: 7839.7237 - val_loss: 7636.6435
Epoch 167/200
213623/213623 [==============================] - 5s 24us/step - loss: 7796.1367 - val_loss: 7684.2012
Epoch 168/200
213623/213623 [==============================] - 5s 24us/step - loss: 7887.4657 - val_loss: 7834.2180
Epoch 169/200
213623/213623 [==============================] - 5s 24us/step - loss: 7821.3344 - val_loss: 7707.8889
Epoch 170/200
213623/213623 [==============================] - 5s 24us/step - loss: 7836.4650 - val_loss: 7568.6512
Epoch 171/200
213623/213623 [==============================] - 5s 24us/step - loss: 7805.0555 - val_loss: 7709.5655
Epoch 172/200
213623/213623 [==============================] - 5s 25us/step - loss: 7817.5075 - val_loss: 7885.0695
Epoch 173/200
213623/213623 [==============================] - 5s 25us/step - loss: 7827.5260 - val_loss: 7652.8748
Epoch 174/200
213623/213623 [==============================] - 5s 25us/step - loss: 7836.8938 - val_loss: 7819.3192
Epoch 175/200
213623/213623 [==============================] - 5s 25us/step - loss: 7801.3705 - val_loss: 7690.2961
Epoch 176/200
213623/213623 [==============================] - 5s 25us/step - loss: 7760.6016 - val_loss: 7821.9822
Epoch 177/200
213623/213623 [==============================] - 5s 25us/step - loss: 7803.1027 - val_loss: 7750.8629
Epoch 178/200
213623/213623 [==============================] - 5s 25us/step - loss: 7829.9456 - val_loss: 7602.0906
Epoch 179/200
213623/213623 [==============================] - 5s 25us/step - loss: 7828.6279 - val_loss: 7734.1176
Epoch 180/200
213623/213623 [==============================] - 5s 24us/step - loss: 7784.1868 - val_loss: 7680.8311
Epoch 181/200
213623/213623 [==============================] - 5s 24us/step - loss: 7827.5779 - val_loss: 7666.9158
Epoch 182/200
213623/213623 [==============================] - 5s 25us/step - loss: 7780.5861 - val_loss: 7571.3294
Epoch 183/200
213623/213623 [==============================] - 5s 25us/step - loss: 7797.5276 - val_loss: 7559.9920
Epoch 184/200
213623/213623 [==============================] - 5s 25us/step - loss: 7788.3267 - val_loss: 7569.8217
Epoch 185/200
213623/213623 [==============================] - 5s 25us/step - loss: 7791.0536 - val_loss: 7699.2577
Epoch 186/200
213623/213623 [==============================] - 5s 25us/step - loss: 7785.3706 - val_loss: 7749.2264
Epoch 187/200
213623/213623 [==============================] - 5s 24us/step - loss: 7779.5232 - val_loss: 7577.2573
Epoch 188/200
213623/213623 [==============================] - 5s 24us/step - loss: 7789.6800 - val_loss: 7629.9796
Epoch 189/200
213623/213623 [==============================] - 5s 25us/step - loss: 7781.2789 - val_loss: 7652.5099
Epoch 190/200
213623/213623 [==============================] - 5s 25us/step - loss: 7820.8558 - val_loss: 7713.4654
Epoch 191/200
213623/213623 [==============================] - 5s 25us/step - loss: 7768.6453 - val_loss: 7621.8802
Epoch 192/200
213623/213623 [==============================] - 5s 25us/step - loss: 7793.4896 - val_loss: 8004.0407
Epoch 193/200
213623/213623 [==============================] - 5s 25us/step - loss: 7793.6967 - val_loss: 7667.4426
Epoch 194/200
213623/213623 [==============================] - 5s 25us/step - loss: 7743.4600 - val_loss: 7650.9675
Epoch 195/200
213623/213623 [==============================] - 5s 24us/step - loss: 7771.3005 - val_loss: 7636.1241
Epoch 196/200
213623/213623 [==============================] - 5s 25us/step - loss: 7759.0002 - val_loss: 7590.1601
Epoch 197/200
213623/213623 [==============================] - 5s 25us/step - loss: 7780.7666 - val_loss: 7727.8993
Epoch 198/200
213623/213623 [==============================] - 5s 25us/step - loss: 7752.4895 - val_loss: 7813.0093
Epoch 199/200
213623/213623 [==============================] - 5s 25us/step - loss: 7730.3046 - val_loss: 7623.7990
Epoch 200/200
213623/213623 [==============================] - 5s 25us/step - loss: 7774.0306 - val_loss: 7668.8883
Tempo total de execução (retreino - em segundos):  1066.7300658226013

In [40]:
train_loss = hist_fc2.history['loss']
val_loss = hist_fc2.history['val_loss']

plot_losses(train_loss, val_loss)



In [41]:
fc_model = load_model(model_path)
pred_fc = fc_model.predict(X_test.reshape(-1, 100)).reshape(-1)
print(pred_fc.shape)


(86757,)

In [42]:
mse_loss_fc = mse_loss(pred_fc, y_test[:,2])
mae_loss_fc = mae_loss(pred_fc, y_test[:,2])
print('MSE no conjunto de teste: ', mse_loss_fc)
print('MAE no conjunto de teste: ', mae_loss_fc)


MSE no conjunto de teste:  8204.637069890681
MAE no conjunto de teste:  46.315712212548895

In [43]:
plot_each_app(df1_test, dates[1][17:], pred_fc, y_test[:,2], 
              'Real e Previsto para o Refrigerator nos 6 dias do conjunto de testes da Resid. 1', look_back = 50)


Aplicando o modelo (RNA-FC) na Residência 2


In [44]:
start = time.time()
X_2, y_2 = process_data(df[2], dates[2], ['mains_2','mains_1'], ['refrigerator_9'])
X_2_fc = X_2.reshape(-1, 100)
y_2 = y_2.reshape(-1)
print('Tempo total de execução (s): ', time.time() - start)
print(X_2_fc.shape, y_2.shape)


Tempo total de execução (s):  365.3649904727936
(316040, 100) (316040,)

In [45]:
pred_fc_50_h2 = fc_model.predict(X_2_fc).reshape(-1)
mse_loss_fc_50_2 = mse_loss(pred_fc_50_h2, y_2)
mae_loss_fc_50_2 = mae_loss(pred_fc_50_h2, y_2)
print('MSE no conjunto de teste: ', mse_loss_fc_50_2)
print('MAE no conjunto de teste: ', mae_loss_fc_50_2)


MSE no conjunto de teste:  11885.207650933737
MAE no conjunto de teste:  71.15573979187448

In [46]:
plot_each_app(df[2], dates[2], pred_fc_50_h2, y_2, 'RNA-FC retreinada (50 últimos dias): treinada na Resid. 1, prevendo na Resid. 2')


Modelo #3: LTSM-Neural Network (LSTM-NN)



In [47]:
def build_lstm_model(layers):
    model = Sequential()
    for i in range(len(layers) - 2):
        model.add(LSTM(
            input_dim=layers[i],
            output_dim=layers[i+1], 
            #stateful=True,
            return_sequences= True if i < len(layers) - 3 else False ))
        model.add(Dropout(0.3))
    
    model.add(Dense(layers[-1]))
    model.summary()
    plot_model(model)
    return model

model = build_lstm_model([2,64,128,256, 1])


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_1 (LSTM)                (None, None, 64)          17152     
_________________________________________________________________
dropout_13 (Dropout)         (None, None, 64)          0         
_________________________________________________________________
lstm_2 (LSTM)                (None, None, 128)         98816     
_________________________________________________________________
dropout_14 (Dropout)         (None, None, 128)         0         
_________________________________________________________________
lstm_3 (LSTM)                (None, 256)               394240    
_________________________________________________________________
dropout_15 (Dropout)         (None, 256)               0         
_________________________________________________________________
dense_13 (Dense)             (None, 1)                 257       
=================================================================
Total params: 510,465
Trainable params: 510,465
Non-trainable params: 0
_________________________________________________________________

In [54]:
# Habilitando HPU
import tensorflow as tf
if tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None):
    print('Ativando GPU!')
    config = tf.ConfigProto(device_count = {'GPU': 1}) 
    sess = tf.Session(config=config)


Out[54]:
True

In [56]:
start = time.time()
adam = Adam(lr = 5e-5)
lstm_model_path = "./resources/lstm_model.hdf5"
model.compile(loss='mean_squared_error', optimizer=adam)
checkpointer = ModelCheckpoint(filepath=lstm_model_path, verbose=0, save_best_only=True)
hist_lstm = model.fit(
            X_train,
            y_train[:,2],
            batch_size=512,
            verbose=1,
            nb_epoch=200,
            validation_split=0.3,
            callbacks=[checkpointer])
print('Tempo de treino (s): ', time.time() - start)


Train on 223188 samples, validate on 95653 samples
Epoch 1/200
223188/223188 [==============================] - 168s 753us/step - loss: 8943.5949 - val_loss: 8659.0293
Epoch 2/200
223188/223188 [==============================] - 102s 456us/step - loss: 8342.2909 - val_loss: 8244.4768
Epoch 3/200
223188/223188 [==============================] - 102s 456us/step - loss: 7855.1496 - val_loss: 7617.7990
Epoch 4/200
223188/223188 [==============================] - 102s 456us/step - loss: 7252.1834 - val_loss: 7139.0343
Epoch 5/200
223188/223188 [==============================] - 101s 454us/step - loss: 6808.4062 - val_loss: 6728.2662
Epoch 6/200
223188/223188 [==============================] - 102s 457us/step - loss: 6408.8027 - val_loss: 6357.2611
Epoch 7/200
223188/223188 [==============================] - 102s 456us/step - loss: 6040.4998 - val_loss: 6010.5497
Epoch 8/200
223188/223188 [==============================] - 102s 456us/step - loss: 5694.8238 - val_loss: 5666.4907
Epoch 9/200
223188/223188 [==============================] - 102s 458us/step - loss: 5372.1195 - val_loss: 5359.5406
Epoch 10/200
223188/223188 [==============================] - 101s 454us/step - loss: 5063.7256 - val_loss: 5068.7524
Epoch 11/200
223188/223188 [==============================] - 101s 454us/step - loss: 4774.4390 - val_loss: 4856.1894
Epoch 12/200
223188/223188 [==============================] - 101s 452us/step - loss: 4496.8720 - val_loss: 4521.8130
Epoch 13/200
223188/223188 [==============================] - 101s 452us/step - loss: 4237.2517 - val_loss: 4270.7337
Epoch 14/200
223188/223188 [==============================] - 101s 451us/step - loss: 3996.8497 - val_loss: 4034.2564
Epoch 15/200
223188/223188 [==============================] - 101s 453us/step - loss: 3761.5842 - val_loss: 3830.4612
Epoch 16/200
223188/223188 [==============================] - 101s 452us/step - loss: 3547.3104 - val_loss: 3599.1129
Epoch 17/200
223188/223188 [==============================] - 101s 453us/step - loss: 3320.0775 - val_loss: 3376.1214
Epoch 18/200
223188/223188 [==============================] - 101s 452us/step - loss: 3109.7418 - val_loss: 3205.0217
Epoch 19/200
223188/223188 [==============================] - 103s 462us/step - loss: 2919.6702 - val_loss: 3021.2742
Epoch 20/200
223188/223188 [==============================] - 108s 486us/step - loss: 2752.8231 - val_loss: 2859.5255
Epoch 21/200
223188/223188 [==============================] - 102s 457us/step - loss: 2594.9011 - val_loss: 2699.2747
Epoch 22/200
223188/223188 [==============================] - 104s 465us/step - loss: 2441.9413 - val_loss: 2580.6607
Epoch 23/200
223188/223188 [==============================] - 102s 457us/step - loss: 2295.1259 - val_loss: 2507.9918
Epoch 24/200
223188/223188 [==============================] - 102s 456us/step - loss: 2165.0950 - val_loss: 2314.1317
Epoch 25/200
223188/223188 [==============================] - 102s 455us/step - loss: 2056.5804 - val_loss: 2228.4889
Epoch 26/200
223188/223188 [==============================] - 102s 456us/step - loss: 1951.2077 - val_loss: 2144.0310
Epoch 27/200
223188/223188 [==============================] - 102s 456us/step - loss: 1866.2470 - val_loss: 2100.4280
Epoch 28/200
223188/223188 [==============================] - 101s 453us/step - loss: 1785.4761 - val_loss: 2025.2649
Epoch 29/200
223188/223188 [==============================] - 101s 452us/step - loss: 1712.8728 - val_loss: 1954.6393
Epoch 30/200
223188/223188 [==============================] - 101s 451us/step - loss: 1644.6488 - val_loss: 1903.1113
Epoch 31/200
223188/223188 [==============================] - 101s 450us/step - loss: 1593.6897 - val_loss: 1835.8557
Epoch 32/200
223188/223188 [==============================] - 101s 451us/step - loss: 1546.5056 - val_loss: 1790.2176
Epoch 33/200
223188/223188 [==============================] - 101s 452us/step - loss: 1510.4170 - val_loss: 1733.1980
Epoch 34/200
223188/223188 [==============================] - 101s 452us/step - loss: 1482.7492 - val_loss: 1742.9473
Epoch 35/200
223188/223188 [==============================] - 99s 442us/step - loss: 1449.3553 - val_loss: 1726.6395
Epoch 36/200
223188/223188 [==============================] - 91s 405us/step - loss: 1432.6871 - val_loss: 1685.0640
Epoch 37/200
223188/223188 [==============================] - 92s 413us/step - loss: 1418.3031 - val_loss: 1740.2415
Epoch 38/200
223188/223188 [==============================] - 93s 418us/step - loss: 1418.8506 - val_loss: 1731.0668
Epoch 39/200
223188/223188 [==============================] - 93s 415us/step - loss: 1403.9040 - val_loss: 1682.9555
Epoch 40/200
223188/223188 [==============================] - 92s 412us/step - loss: 1380.1365 - val_loss: 1624.1459
Epoch 41/200
223188/223188 [==============================] - 93s 416us/step - loss: 1354.6968 - val_loss: 1683.9599
Epoch 42/200
223188/223188 [==============================] - 92s 412us/step - loss: 1337.9524 - val_loss: 1652.3018
Epoch 43/200
223188/223188 [==============================] - 92s 411us/step - loss: 1331.8168 - val_loss: 1630.9545
Epoch 44/200
223188/223188 [==============================] - 92s 413us/step - loss: 1314.5780 - val_loss: 1682.1249
Epoch 45/200
223188/223188 [==============================] - 92s 413us/step - loss: 1307.0762 - val_loss: 1662.3963
Epoch 46/200
223188/223188 [==============================] - 92s 411us/step - loss: 1295.2622 - val_loss: 1799.8273
Epoch 47/200
223188/223188 [==============================] - 90s 405us/step - loss: 1288.0160 - val_loss: 1670.9439
Epoch 48/200
223188/223188 [==============================] - 92s 411us/step - loss: 1282.3924 - val_loss: 1608.9230
Epoch 49/200
223188/223188 [==============================] - 92s 411us/step - loss: 1269.1332 - val_loss: 1653.6792
Epoch 50/200
223188/223188 [==============================] - 92s 411us/step - loss: 1261.8044 - val_loss: 1668.0643
Epoch 51/200
223188/223188 [==============================] - 92s 414us/step - loss: 1248.2955 - val_loss: 1639.8839
Epoch 52/200
223188/223188 [==============================] - 92s 414us/step - loss: 1246.2052 - val_loss: 1656.9696
Epoch 53/200
223188/223188 [==============================] - 92s 412us/step - loss: 1254.2022 - val_loss: 1672.3666
Epoch 54/200
223188/223188 [==============================] - 92s 411us/step - loss: 1243.7531 - val_loss: 1640.8138
Epoch 55/200
223188/223188 [==============================] - 91s 410us/step - loss: 1217.6211 - val_loss: 1589.3328
Epoch 56/200
223188/223188 [==============================] - 91s 409us/step - loss: 1210.4856 - val_loss: 1678.3174
Epoch 57/200
223188/223188 [==============================] - 92s 412us/step - loss: 1200.8646 - val_loss: 1652.7657
Epoch 58/200
223188/223188 [==============================] - 92s 413us/step - loss: 1190.6781 - val_loss: 1669.3026
Epoch 59/200
223188/223188 [==============================] - 92s 413us/step - loss: 1189.0781 - val_loss: 1641.7516
Epoch 60/200
223188/223188 [==============================] - 92s 412us/step - loss: 1201.5369 - val_loss: 1539.0931
Epoch 61/200
223188/223188 [==============================] - 91s 410us/step - loss: 1184.6238 - val_loss: 1788.2007
Epoch 62/200
223188/223188 [==============================] - 92s 411us/step - loss: 1171.2015 - val_loss: 1608.1176
Epoch 63/200
223188/223188 [==============================] - 92s 413us/step - loss: 1175.9826 - val_loss: 1655.2032
Epoch 64/200
223188/223188 [==============================] - 92s 414us/step - loss: 1148.1471 - val_loss: 1570.7489
Epoch 65/200
223188/223188 [==============================] - 93s 417us/step - loss: 1146.7035 - val_loss: 1537.1535
Epoch 66/200
223188/223188 [==============================] - 92s 414us/step - loss: 1146.5029 - val_loss: 1529.7346
Epoch 67/200
223188/223188 [==============================] - 92s 413us/step - loss: 1136.1335 - val_loss: 1522.8046
Epoch 68/200
223188/223188 [==============================] - 92s 413us/step - loss: 1125.4388 - val_loss: 1517.3856
Epoch 69/200
223188/223188 [==============================] - 93s 415us/step - loss: 1188.5480 - val_loss: 1489.6754
Epoch 70/200
223188/223188 [==============================] - 91s 410us/step - loss: 1129.3407 - val_loss: 1565.3576
Epoch 71/200
223188/223188 [==============================] - 92s 412us/step - loss: 1123.0530 - val_loss: 1562.5574
Epoch 72/200
223188/223188 [==============================] - 92s 414us/step - loss: 1111.7344 - val_loss: 1568.7171
Epoch 73/200
223188/223188 [==============================] - 91s 408us/step - loss: 1117.7196 - val_loss: 1545.9733
Epoch 74/200
223188/223188 [==============================] - 93s 417us/step - loss: 1094.8740 - val_loss: 1576.3128
Epoch 75/200
223188/223188 [==============================] - 91s 409us/step - loss: 1095.8083 - val_loss: 1501.5863
Epoch 76/200
223188/223188 [==============================] - 92s 412us/step - loss: 1090.8198 - val_loss: 1546.7796
Epoch 77/200
223188/223188 [==============================] - 91s 408us/step - loss: 1082.8723 - val_loss: 1510.6154
Epoch 78/200
223188/223188 [==============================] - 93s 416us/step - loss: 1084.0157 - val_loss: 1544.8657
Epoch 79/200
223188/223188 [==============================] - 98s 440us/step - loss: 1072.7860 - val_loss: 1474.1361
Epoch 80/200
223188/223188 [==============================] - 101s 454us/step - loss: 1068.6321 - val_loss: 1712.9469
Epoch 81/200
223188/223188 [==============================] - 99s 441us/step - loss: 1079.1821 - val_loss: 1470.6729
Epoch 82/200
223188/223188 [==============================] - 91s 410us/step - loss: 1061.2951 - val_loss: 1503.4145
Epoch 83/200
223188/223188 [==============================] - 91s 408us/step - loss: 1062.7790 - val_loss: 1579.2975
Epoch 84/200
223188/223188 [==============================] - 92s 413us/step - loss: 1062.1057 - val_loss: 1534.3053
Epoch 85/200
223188/223188 [==============================] - 92s 412us/step - loss: 1054.3071 - val_loss: 1410.8361
Epoch 86/200
223188/223188 [==============================] - 92s 410us/step - loss: 1049.8619 - val_loss: 1537.1563
Epoch 87/200
223188/223188 [==============================] - 92s 410us/step - loss: 1043.1860 - val_loss: 1502.8590
Epoch 88/200
223188/223188 [==============================] - 92s 414us/step - loss: 1043.3761 - val_loss: 1646.1887
Epoch 89/200
223188/223188 [==============================] - 92s 413us/step - loss: 1061.7725 - val_loss: 1649.8972
Epoch 90/200
223188/223188 [==============================] - 92s 412us/step - loss: 1042.4061 - val_loss: 1377.0558
Epoch 91/200
223188/223188 [==============================] - 94s 420us/step - loss: 1033.7406 - val_loss: 1320.5473
Epoch 92/200
223188/223188 [==============================] - 92s 411us/step - loss: 1026.8144 - val_loss: 1423.8729
Epoch 93/200
223188/223188 [==============================] - 92s 411us/step - loss: 1019.9618 - val_loss: 1432.0475
Epoch 94/200
223188/223188 [==============================] - 93s 417us/step - loss: 1054.2099 - val_loss: 1426.5323
Epoch 95/200
223188/223188 [==============================] - 91s 409us/step - loss: 1017.6584 - val_loss: 1444.0367
Epoch 96/200
223188/223188 [==============================] - 93s 417us/step - loss: 1008.2875 - val_loss: 1441.0623
Epoch 97/200
223188/223188 [==============================] - 92s 412us/step - loss: 1011.0447 - val_loss: 1306.3959
Epoch 98/200
223188/223188 [==============================] - 92s 413us/step - loss: 1006.5833 - val_loss: 1355.0281
Epoch 99/200
223188/223188 [==============================] - 92s 413us/step - loss: 998.1744 - val_loss: 1449.9772
Epoch 100/200
223188/223188 [==============================] - 92s 413us/step - loss: 995.4586 - val_loss: 1522.4703
Epoch 101/200
223188/223188 [==============================] - 93s 416us/step - loss: 996.4196 - val_loss: 1378.3551
Epoch 102/200
223188/223188 [==============================] - 91s 406us/step - loss: 993.4401 - val_loss: 1378.2480
Epoch 103/200
223188/223188 [==============================] - 93s 418us/step - loss: 1020.2383 - val_loss: 1357.0375
Epoch 104/200
223188/223188 [==============================] - 93s 416us/step - loss: 1007.2298 - val_loss: 1392.6196
Epoch 105/200
223188/223188 [==============================] - 92s 412us/step - loss: 987.3569 - val_loss: 1350.6904
Epoch 106/200
223188/223188 [==============================] - 93s 417us/step - loss: 980.5170 - val_loss: 2734.7795
Epoch 107/200
223188/223188 [==============================] - 94s 420us/step - loss: 1002.5396 - val_loss: 1360.0958
Epoch 108/200
223188/223188 [==============================] - 92s 413us/step - loss: 975.7612 - val_loss: 1380.5693
Epoch 109/200
223188/223188 [==============================] - 93s 415us/step - loss: 973.4745 - val_loss: 1389.5233
Epoch 110/200
223188/223188 [==============================] - 92s 413us/step - loss: 966.4909 - val_loss: 1347.7783
Epoch 111/200
223188/223188 [==============================] - 92s 414us/step - loss: 976.1065 - val_loss: 1523.5033
Epoch 112/200
223188/223188 [==============================] - 93s 416us/step - loss: 972.6841 - val_loss: 1516.5198
Epoch 113/200
223188/223188 [==============================] - 92s 414us/step - loss: 970.8146 - val_loss: 1377.5597
Epoch 114/200
223188/223188 [==============================] - 92s 414us/step - loss: 959.5547 - val_loss: 1344.6413
Epoch 115/200
223188/223188 [==============================] - 92s 414us/step - loss: 976.2355 - val_loss: 1321.4030
Epoch 116/200
223188/223188 [==============================] - 93s 415us/step - loss: 968.9234 - val_loss: 1422.1058
Epoch 117/200
223188/223188 [==============================] - 93s 415us/step - loss: 956.3795 - val_loss: 1343.5809
Epoch 118/200
223188/223188 [==============================] - 93s 417us/step - loss: 960.0646 - val_loss: 1371.4218
Epoch 119/200
223188/223188 [==============================] - 93s 415us/step - loss: 956.9103 - val_loss: 1467.8114
Epoch 120/200
223188/223188 [==============================] - 92s 411us/step - loss: 964.4630 - val_loss: 1369.6220
Epoch 121/200
223188/223188 [==============================] - 93s 416us/step - loss: 949.6657 - val_loss: 1412.3082
Epoch 122/200
223188/223188 [==============================] - 93s 417us/step - loss: 952.8962 - val_loss: 1395.6611
Epoch 123/200
223188/223188 [==============================] - 92s 414us/step - loss: 960.1421 - val_loss: 1473.1208
Epoch 124/200
223188/223188 [==============================] - 94s 419us/step - loss: 950.9336 - val_loss: 1376.8374
Epoch 125/200
223188/223188 [==============================] - 93s 416us/step - loss: 940.1650 - val_loss: 1265.1566
Epoch 126/200
223188/223188 [==============================] - 92s 414us/step - loss: 950.4345 - val_loss: 1385.1536
Epoch 127/200
223188/223188 [==============================] - 93s 417us/step - loss: 940.6836 - val_loss: 1449.7949
Epoch 128/200
223188/223188 [==============================] - 94s 420us/step - loss: 950.8589 - val_loss: 1341.7591
Epoch 129/200
223188/223188 [==============================] - 94s 422us/step - loss: 936.2275 - val_loss: 1430.0361
Epoch 130/200
223188/223188 [==============================] - 94s 421us/step - loss: 934.2227 - val_loss: 1349.3153
Epoch 131/200
223188/223188 [==============================] - 93s 418us/step - loss: 938.2121 - val_loss: 1417.5871
Epoch 132/200
223188/223188 [==============================] - 92s 411us/step - loss: 941.0465 - val_loss: 1449.7352
Epoch 133/200
223188/223188 [==============================] - 93s 416us/step - loss: 940.2050 - val_loss: 1381.9114
Epoch 134/200
223188/223188 [==============================] - 92s 411us/step - loss: 940.5854 - val_loss: 1406.1963
Epoch 135/200
223188/223188 [==============================] - 92s 412us/step - loss: 941.5712 - val_loss: 1486.7710
Epoch 136/200
223188/223188 [==============================] - 93s 416us/step - loss: 937.0101 - val_loss: 1366.7320
Epoch 137/200
223188/223188 [==============================] - 93s 415us/step - loss: 936.3842 - val_loss: 1328.4757
Epoch 138/200
223188/223188 [==============================] - 92s 413us/step - loss: 930.6814 - val_loss: 1373.4209
Epoch 139/200
223188/223188 [==============================] - 93s 415us/step - loss: 926.3472 - val_loss: 1353.0253
Epoch 140/200
223188/223188 [==============================] - 92s 413us/step - loss: 926.4477 - val_loss: 1438.5609
Epoch 141/200
223188/223188 [==============================] - 92s 411us/step - loss: 920.4436 - val_loss: 1859.5334
Epoch 142/200
223188/223188 [==============================] - 91s 408us/step - loss: 921.6020 - val_loss: 1370.3122
Epoch 143/200
223188/223188 [==============================] - 92s 413us/step - loss: 934.3075 - val_loss: 1412.5617
Epoch 144/200
223188/223188 [==============================] - 92s 414us/step - loss: 935.8863 - val_loss: 1580.4763
Epoch 145/200
223188/223188 [==============================] - 93s 416us/step - loss: 930.8981 - val_loss: 1322.6951
Epoch 146/200
223188/223188 [==============================] - 92s 414us/step - loss: 921.4858 - val_loss: 1765.2611
Epoch 147/200
223188/223188 [==============================] - 92s 414us/step - loss: 920.7796 - val_loss: 1304.3785
Epoch 148/200
223188/223188 [==============================] - 93s 417us/step - loss: 914.9987 - val_loss: 1790.7970
Epoch 149/200
223188/223188 [==============================] - 92s 412us/step - loss: 915.1988 - val_loss: 1675.1267
Epoch 150/200
223188/223188 [==============================] - 92s 412us/step - loss: 922.5304 - val_loss: 1301.9705
Epoch 151/200
223188/223188 [==============================] - 91s 407us/step - loss: 921.5703 - val_loss: 1443.6646
Epoch 152/200
223188/223188 [==============================] - 92s 410us/step - loss: 915.7250 - val_loss: 1565.5451
Epoch 153/200
223188/223188 [==============================] - 92s 413us/step - loss: 924.8372 - val_loss: 1362.0814
Epoch 154/200
223188/223188 [==============================] - 92s 412us/step - loss: 910.4015 - val_loss: 1443.8049
Epoch 155/200
223188/223188 [==============================] - 93s 418us/step - loss: 913.8489 - val_loss: 1360.5005
Epoch 156/200
223188/223188 [==============================] - 92s 414us/step - loss: 927.0364 - val_loss: 1333.6451
Epoch 157/200
223188/223188 [==============================] - 91s 409us/step - loss: 904.8803 - val_loss: 1366.4442
Epoch 158/200
223188/223188 [==============================] - 92s 412us/step - loss: 925.3147 - val_loss: 1362.5195
Epoch 159/200
223188/223188 [==============================] - 92s 414us/step - loss: 908.2308 - val_loss: 1373.4119
Epoch 160/200
223188/223188 [==============================] - 92s 412us/step - loss: 906.9199 - val_loss: 1398.0061
Epoch 161/200
223188/223188 [==============================] - 92s 412us/step - loss: 907.0850 - val_loss: 1415.6353
Epoch 162/200
223188/223188 [==============================] - 91s 409us/step - loss: 899.4917 - val_loss: 1350.4315
Epoch 163/200
223188/223188 [==============================] - 91s 409us/step - loss: 900.8761 - val_loss: 1316.9357
Epoch 164/200
223188/223188 [==============================] - 91s 409us/step - loss: 901.6746 - val_loss: 1380.5614
Epoch 165/200
223188/223188 [==============================] - 92s 411us/step - loss: 894.2485 - val_loss: 1507.2293
Epoch 166/200
223188/223188 [==============================] - 92s 412us/step - loss: 899.3563 - val_loss: 1323.6596
Epoch 167/200
223188/223188 [==============================] - 91s 409us/step - loss: 893.9272 - val_loss: 1403.0451
Epoch 168/200
223188/223188 [==============================] - 92s 412us/step - loss: 903.1564 - val_loss: 1556.4731
Epoch 169/200
223188/223188 [==============================] - 92s 412us/step - loss: 890.4273 - val_loss: 1501.5147
Epoch 170/200
223188/223188 [==============================] - 91s 409us/step - loss: 889.5180 - val_loss: 1301.5587
Epoch 171/200
223188/223188 [==============================] - 91s 410us/step - loss: 898.6636 - val_loss: 1298.1781
Epoch 172/200
223188/223188 [==============================] - 93s 415us/step - loss: 891.2156 - val_loss: 1404.9187
Epoch 173/200
223188/223188 [==============================] - 91s 410us/step - loss: 910.4658 - val_loss: 1273.9385
Epoch 174/200
223188/223188 [==============================] - 92s 411us/step - loss: 892.6312 - val_loss: 1470.2750
Epoch 175/200
223188/223188 [==============================] - 93s 416us/step - loss: 894.0665 - val_loss: 1374.2275
Epoch 176/200
223188/223188 [==============================] - 91s 409us/step - loss: 881.0720 - val_loss: 1337.6743
Epoch 177/200
223188/223188 [==============================] - 92s 410us/step - loss: 891.8075 - val_loss: 1323.0394
Epoch 178/200
223188/223188 [==============================] - 92s 412us/step - loss: 889.4111 - val_loss: 1441.4951
Epoch 179/200
223188/223188 [==============================] - 101s 455us/step - loss: 888.8894 - val_loss: 1318.8626
Epoch 180/200
223188/223188 [==============================] - 93s 417us/step - loss: 882.7221 - val_loss: 1276.4567
Epoch 181/200
223188/223188 [==============================] - 92s 412us/step - loss: 888.3250 - val_loss: 1387.3242
Epoch 182/200
223188/223188 [==============================] - 93s 416us/step - loss: 884.5539 - val_loss: 1356.4089
Epoch 183/200
223188/223188 [==============================] - 92s 413us/step - loss: 878.5238 - val_loss: 1328.4911
Epoch 184/200
223188/223188 [==============================] - 92s 414us/step - loss: 883.3976 - val_loss: 1405.9532
Epoch 185/200
223188/223188 [==============================] - 93s 416us/step - loss: 878.3099 - val_loss: 1417.3281
Epoch 186/200
223188/223188 [==============================] - 92s 414us/step - loss: 873.1693 - val_loss: 1378.6180
Epoch 187/200
223188/223188 [==============================] - 91s 409us/step - loss: 883.9835 - val_loss: 1354.4516
Epoch 188/200
223188/223188 [==============================] - 92s 413us/step - loss: 884.9449 - val_loss: 1369.6398
Epoch 189/200
223188/223188 [==============================] - 93s 416us/step - loss: 875.6288 - val_loss: 1331.6496
Epoch 190/200
223188/223188 [==============================] - 92s 412us/step - loss: 889.0567 - val_loss: 1407.5199
Epoch 191/200
223188/223188 [==============================] - 92s 412us/step - loss: 872.6053 - val_loss: 1345.6381
Epoch 192/200
223188/223188 [==============================] - 92s 413us/step - loss: 870.0927 - val_loss: 1339.1862
Epoch 193/200
223188/223188 [==============================] - 92s 414us/step - loss: 878.5493 - val_loss: 1244.5383
Epoch 194/200
223188/223188 [==============================] - 93s 416us/step - loss: 872.9197 - val_loss: 1495.9915
Epoch 195/200
223188/223188 [==============================] - 91s 407us/step - loss: 886.2711 - val_loss: 1345.6660
Epoch 196/200
223188/223188 [==============================] - 92s 411us/step - loss: 876.1474 - val_loss: 1330.5983
Epoch 197/200
223188/223188 [==============================] - 93s 415us/step - loss: 885.6285 - val_loss: 1317.5131
Epoch 198/200
223188/223188 [==============================] - 92s 412us/step - loss: 880.7529 - val_loss: 1354.1674
Epoch 199/200
223188/223188 [==============================] - 93s 416us/step - loss: 870.8371 - val_loss: 1329.5663
Epoch 200/200
223188/223188 [==============================] - 92s 413us/step - loss: 864.8168 - val_loss: 1324.1560
Tempo de treino (s):  18869.911180257797

In [57]:
train_loss = hist_lstm.history['loss']
val_loss = hist_lstm.history['val_loss']

plot_losses(train_loss, val_loss)



In [58]:
model = load_model(lstm_model_path)
pred_lstm = model.predict(X_test).reshape(-1)
print(pred_lstm.shape)


(86757,)

In [59]:
mse_loss_lstm = mse_loss(pred_lstm, y_test[:,2])
mae_loss_lstm = mae_loss(pred_lstm, y_test[:,2])
print('MSE no conjunto de teste: ', mse_loss_lstm)
print('MAE no conjunto de teste: ', mae_loss_lstm)


MSE no conjunto de teste:  1580.8408967132395
MAE no conjunto de teste:  8.588973829278297

In [60]:
plot_each_app(df1_test, dates[1][17:], pred_lstm, y_test[:,2], 
              'Real e Previsto para os 6 dias do conjunto de testes da Resid. 1', look_back = 50)


Aplicando LSTM-NN


In [61]:
pred_lstm_h2 = model.predict(X_2).reshape(-1)
mse_loss_lstm_h2 = mse_loss(pred_lstm_h2, y_2)
mae_loss_lstm_h2 = mae_loss(pred_lstm_h2, y_2)
print('MSE no conjunto de teste: ', mse_loss_lstm_h2)
print('MAE no conjunto de teste: ', mae_loss_lstm_h2)


MSE no conjunto de teste:  10125.031587792071
MAE no conjunto de teste:  53.54507986167525

In [62]:
plot_each_app(df[2], dates[2], pred_lstm_h2, y_2, 
              'Real e Previsto para os 6 dias do conjunto de testes da Resid. 2', look_back = 50)



In [ ]:


In [ ]: